Differences between the classic and the Azure Resource Manager deployment model

I often get these questions: where should I create my resources? In the classic portal or in the ARM portal? What is the difference? Is it only the URL? Why, if I create a VM in the classic model, can I not use a network created in the ARM portal within the same region, or vice versa?

All these questions are valid, but some of them stem from misconceptions about the way Azure works or has been set up. In an older post I covered the difference in creating virtual machines between the ASM portal (classic) and the ARM portal (see here); however, there is a lot more to Azure than virtual machines.

Although most of the general networking principles outlined in the previous post (see here) apply to Azure virtual networks regardless of their deployment model, there are some differences between Azure Resource Manager and classic virtual networks.

In the classic deployment model, network characteristics of virtual machines are determined by:

  • A mandatory cloud service that serves as a logical container for virtual machines.
  • An optional virtual network that allows you to implement direct connectivity among virtual machines in different cloud services and to on-premises networks. In particular, cloud services support virtual network placement but do not enforce it. As a result, you have the option of deploying a cloud service without creating a new virtual network or without using an existing one.
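For example, the following minimal sketch uses the classic (Service Management) Azure PowerShell module to create a virtual machine together with its mandatory cloud service in one step, without any virtual network. The service name, virtual machine name, and credentials are hypothetical placeholders.

  # Pick the latest Windows Server 2012 R2 image from the classic image gallery.
  $image = (Get-AzureVMImage |
      Where-Object { $_.ImageFamily -eq "Windows Server 2012 R2 Datacenter" } |
      Sort-Object PublishedDate -Descending |
      Select-Object -First 1).ImageName

  # Specifying -Location makes New-AzureQuickVM create the cloud service implicitly.
  New-AzureQuickVM -Windows -ServiceName "mdnogasvc" -Name "mdnogavm1" `
      -ImageName $image -AdminUsername "azureadmin" -Password "P@ssw0rd123!" `
      -Location "West US"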

In the Azure Resource Manager model, the network characteristics of virtual machines are determined as follows:

  • There is no support for cloud services. To deliver the equivalent functionality, the Azure Resource Manager model provides a number of additional resource types. In particular, to implement load balancing and NAT, you can deploy an Azure load balancer. To allow connectivity to a virtual machine, you must create a virtual network interface and attach it to the virtual machine (a sketch follows the note below). While this increases to some extent the complexity of provisioning resources, it offers significant performance and flexibility benefits. In particular, you can deploy your solutions much faster than when using the classic deployment model.
  • A virtual machine must reside within a virtual network. A virtual machine attaches to a virtual network by using one or more virtual network interface cards.

Note: A load balancer constitutes a separate Azure Resource Manager resource, while in the classic deployment model it is part of the cloud service in which load-balanced virtual machines reside. Similarly, a network interface is an inherent part of a classic virtual machine, but Azure Resource Manager allows you to manage it separately, including detaching it from one virtual machine and attaching it to another. The same logic applies to a public IP address. In particular, every cloud service has at least one automatically assigned public IP address, whereas public IP address assignment is optional with Azure Resource Manager. Because the Azure Resource Manager deployment model does not support cloud services, you instead have the choice of associating a public IP address with either an Azure load balancer or a network adapter.
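To illustrate how these separate resources fit together, here is a minimal sketch using the AzureRM PowerShell module. All resource names and the region are hypothetical placeholders; the point is the order of dependencies: resource group, virtual network, public IP address, and finally the network interface that a virtual machine would attach to.

  # Every Azure Resource Manager resource lives in a resource group.
  New-AzureRmResourceGroup -Name "mdnoga-rg" -Location "West US"

  # The virtual network is mandatory for ARM virtual machines.
  $subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.0.0.0/24"
  $vnet = New-AzureRmVirtualNetwork -Name "mdnoga-vnet" -ResourceGroupName "mdnoga-rg" `
      -Location "West US" -AddressPrefix "10.0.0.0/16" -Subnet $subnet

  # The public IP address and the network interface are independent, reusable resources.
  $pip = New-AzureRmPublicIpAddress -Name "mdnoga-pip" -ResourceGroupName "mdnoga-rg" `
      -Location "West US" -AllocationMethod Dynamic
  $nic = New-AzureRmNetworkInterface -Name "mdnoga-nic" -ResourceGroupName "mdnoga-rg" `
      -Location "West US" -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id

You would then reference $nic.Id when building the virtual machine configuration (for example, with Add-AzureRmVMNetworkInterface), which is exactly what makes the NIC detachable and reusable.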

The following table summarizes the primary differences between the classic deployment model and the Azure Resource Manager model from the networking standpoint.

| Item | Azure Service Management | Azure Resource Manager |
| --- | --- | --- |
| Azure Cloud Services for virtual machines | The cloud service is a mandatory container for virtual machines and associated objects. | The cloud service does not exist. |
| Load balancing | The cloud service functions as a load balancer for infrastructure as a service (IaaS) resources within Azure Cloud Services. | The load balancer is an independent resource. You can associate a network adapter that is attached to a virtual machine with a load balancer. |
| Virtual IP address (VIP) | The platform automatically assigns a VIP to a cloud service upon its creation. You use this IP address to allow connectivity to virtual machines within the cloud service from the Internet or from Azure-resident services. | You have the option of assigning a public IP address to a network adapter or a load balancer. |
| Reserved IP address | You can reserve an IP address in Azure and then associate it with a cloud service to ensure that its VIP remains constant. | Static public IP addresses provide the same capability as reserved IP addresses. |
| Public IP address per virtual machine | You can assign public IP addresses to a virtual machine directly. | You can assign public IP addresses to a network interface attached to a virtual machine. |
| Endpoints | You can allow external connections to virtual machines by configuring endpoints of the cloud service. | You can access a virtual machine by using its public IP address. Alternatively, you can provide access to a virtual machine on a specific port by configuring inbound NAT rules on a load balancer associated with the network adapter attached to the virtual machine. |
| DNS name | Every cloud service has a public DNS name in the cloudapp.net namespace, such as mdnogadev.cloudapp.net. | The DNS name associated with a public IP address of a virtual machine or a load balancer is optional. The FQDN includes the Azure region where the load balancer and the virtual machine reside, such as mdnogavm1.westus.cloudapp.azure.com. |
| Network interfaces | You define the primary and secondary network interfaces within the configuration of a virtual machine. | The network interface is an independent resource that persists in the Azure environment. You can attach it to, and detach it from, virtual machines without losing its identity and configuration state. Its lifecycle does not have to depend on the lifecycle of a virtual machine. |
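As a concrete illustration of the Endpoints row above, the following sketch contrasts opening RDP access to a virtual machine in both models. Service, resource, and rule names are hypothetical placeholders, and the ARM example assumes the NIC is already associated with the load balancer.

  # Classic: add an endpoint to the cloud service hosting the virtual machine.
  Get-AzureVM -ServiceName "mdnogasvc" -Name "mdnogavm1" |
      Add-AzureEndpoint -Name "RDP" -Protocol tcp -LocalPort 3389 -PublicPort 50001 |
      Update-AzureVM

  # ARM: add an inbound NAT rule to the load balancer associated with the NIC.
  $lb = Get-AzureRmLoadBalancer -Name "mdnoga-lb" -ResourceGroupName "mdnoga-rg"
  $lb | Add-AzureRmLoadBalancerInboundNatRuleConfig -Name "RDP-vm1" `
      -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
      -Protocol Tcp -FrontendPort 50001 -BackendPort 3389
  $lb | Set-AzureRmLoadBalancer
  # The NIC's IP configuration must then reference the new NAT rule
  # (update the ipconfig and apply it with Set-AzureRmNetworkInterface).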


Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

How to extend Azure Service Fabric to on-premises?

You can deploy a Service Fabric cluster on any physical or virtual machine running the Windows Server operating system, including ones residing in your on-premises datacenters. You use the standalone Windows Server package available for download from the Azure online documentation website (see here).

To deploy an on-premises Service Fabric cluster, perform the following steps:

  1. Plan for your cluster infrastructure. You must consider the resiliency of the hardware and network components and the physical security of their location, because in this scenario you can no longer rely on the resiliency and security features built into the Azure platform.
  2. Prepare the Windows Server computers to satisfy the prerequisites. Each computer must be running Windows Server 2012 or Windows Server 2012 R2, have at least 2 GB of RAM, and have the Microsoft .NET Framework 4.5.1 or later installed. Additionally, ensure that the Remote Registry service is running.
  3. Determine the initial cluster size. At a minimum, the cluster must have three nodes, with one node per physical or virtual machine. Adjust the number of nodes according to your projected workload.
  4. Determine the number of fault domains and upgrade domains. The assignment of fault domains should reflect the underlying physical infrastructure and the level of its resiliency to a single component failure. Typically, you consider a single physical rack to constitute a single fault domain. The assignment of upgrade domains should reflect the number of nodes that you plan to take down during application or cluster upgrades.
  5. Download the Service Fabric standalone package for Windows Server.
  6. Define the cluster configuration. Specify the cluster settings before provisioning. To do this, modify the ClusterConfig.json file included in the Service Fabric standalone package for Windows Server. The file has several sections, including:
    o    NodeTypes. NodeTypes allow you to divide cluster nodes into groups represented by node types and to assign common properties to the nodes within each group. These properties specify endpoint ports, placement constraints, or capacities.
    o    Nodes. Nodes determine the settings of individual cluster nodes, such as the node name, IP address, fault domain, or upgrade domain.
  7. Run the create cluster script. After you modify the ClusterConfig.json file to match your requirements and preferences, run the CreateServiceFabricCluster.ps1 Windows PowerShell script and reference the .json file as its parameter. You can run the script on any computer with connectivity to all the cluster nodes (see the sketch after this list).
  8. Connect to the cluster. To manage the cluster, connect to it via http://IP_Address_of_a_node:19080/Explorer/index.htm, where IP_Address_of_a_node is the IP address of any of the cluster nodes.
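A minimal sketch of steps 6 through 8 follows. The node names, IP addresses, and paths are hypothetical, and the JSON excerpt mirrors the structure of the sample templates shipped in the standalone package.

  # ClusterConfig.json (excerpt): one node type, three nodes, one per machine.
  #   "nodes": [
  #     { "nodeName": "node1", "iPAddress": "10.0.0.4", "nodeTypeRef": "NodeType0",
  #       "faultDomain": "fd:/rack1", "upgradeDomain": "UD0" },
  #     ...
  #   ]

  # Run from any machine with connectivity to all cluster nodes.
  .\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -AcceptEULA

  # With the Service Fabric SDK installed, you can then connect from PowerShell
  # in addition to using Service Fabric Explorer on port 19080.
  Connect-ServiceFabricCluster -ConnectionEndpoint "10.0.0.4:19000"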


Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

What is the difference between Azure Virtual Machines and Azure Cloud Services?

Azure offers several compute hosting options for integrating on-premises workloads with Azure. In a previous post (see here), I described the difference between the two deployment models, Azure Resource Manager and Azure Service Management (classic). In this post, I will focus on the difference between Azure Virtual Machines and Azure Cloud Services, because they serve as the basis for integration solutions.

Azure virtual machines

Azure virtual machines provide the greatest degree of control over the virtual machine operating system. You can configure an Azure virtual machine arbitrarily and install almost any third-party software, as long as you do not violate the platform's restrictions. Every virtual machine has at least one disk, with support for up to 64 data disks, all of which persist their content across restarts.

You can provision Azure virtual machines by using either the classic or the Azure Resource Manager deployment model. When using the Azure Resource Manager deployment model, you must deploy Azure virtual machines into an Azure virtual network.

Because you have complete control over the virtual machine at the operating system level, you are responsible for maintaining the operating system. The responsibilities include installing software updates from the operating system vendor, performing backups, and implementing resiliency to provide a sufficient level of business continuity.

When using the classic deployment model to horizontally scale Azure virtual machines, you must pre-provision additional Azure virtual machines and keep them offline until you are ready to scale out. With the Azure Resource Manager deployment model, you have the option of using virtual machine scale sets for horizontal scaling.

Azure virtual machines are best suited for hosting:

  • Windows Server or Linux infrastructure servers, such as Active Directory domain controllers or Domain Name System (DNS) servers.
  • Highly customized app servers for which the setup involves a complex configuration.
  • Stateful workloads that require persistent storage, such as database servers.

Azure Cloud Services

Azure Cloud Services allows you to manage the virtual machine operating system. However, because Azure Cloud Services uses temporary storage, any change you apply directly does not persist across restarts. The platform automatically provisions the virtual disks whenever you start the service, based on the custom code and configuration files you provide. Moreover, you are not responsible for maintaining operating system updates. Business continuity is part of the service, with the code and configuration automatically replicated across multiple locations.

Azure Cloud Services supports only the classic deployment model. You can deploy a cloud service into a virtual network, but you must provision such a network by using the classic deployment model.

Azure Cloud Services offers superior horizontal scaling capabilities when compared with Azure virtual machines. It can scale to thousands of instances, which the Azure platform automatically provisions based on criteria you define. In addition, it simplifies the development of solutions that consist of multiple tiers. In a typical implementation, a cloud service contains a web role and a worker role. The web role contains virtual machines that provide front-end functionality. The worker role manages the processing of background tasks. Both roles can scale independently of each other.
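For instance, scaling out a role in the classic model amounts to a single instance-count change that the platform acts on automatically. Here is a minimal sketch using the classic Azure PowerShell module, with hypothetical service and role names:

  # Increase the number of web role instances; Azure provisions them automatically.
  Set-AzureRole -ServiceName "mdnogasvc" -Slot Production -RoleName "WebRole1" -Count 5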

Azure cloud services are best suited for hosting:

  • Multitiered web apps.
  • Stateless apps that require a highly scalable, high-performance environment.


Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

Virtual Machines: Hyper-V vs Azure IaaS

A lot of times during the Azure Foundation workshops that I usually deliver, the difference between Azure virtual machines and Hyper-V virtual machines comes up. So I decided to write this post explaining the most important differences between them. Sometimes, depending on the workload, it doesn't make sense to move to the cloud. I hope this post clarifies that for you.

Because Azure uses Hyper‑V as the virtualization platform, Azure virtual machines share the majority of their characteristics with the Hyper‑V virtual machines you deploy in your on-premises datacenter. However, several important differences exist between them:

  • Azure virtual machines that you can provision are available in specific sizes. You do not have the option of specifying arbitrary processing, memory, or storage parameters when deploying a virtual machine. Instead, you must select one of the predefined choices. At the present time, Microsoft offers virtual machines in two tiers: basic and standard. The basic tier, intended for development and test workloads, includes five virtual machine sizes, ranging from 1 core with 0.75 gigabytes (GB) of RAM to 8 cores with 14 GB of RAM. The standard tier has several series, including A, D, Dv2, DS, DSv2, F, Fs, G, GS, NV, and NC, for a total of 74 virtual machine sizes, with the largest one featuring 32 cores, 448 GB of RAM, and up to 64 disks.
  • There is a 1 TB size limit on a virtual disk that you can attach to an Azure virtual machine. This does not, however, imply a limit on the size of the volumes you can create. You can build multiple-disk volumes by using Storage Spaces in Windows Server or volume managers in Linux; by following this approach, you can create volumes of up to 64 TB. The maximum volume size depends on the size of the virtual machine, which determines the maximum number of disks you can attach to it (see the sketch after this list).
  • A limit also exists on the throughput and input/output operations per second (IOPS) that is supported by individual disks. With standard storage, you should expect about 60 megabytes per second (MBps) or 500 8-kilobyte (KB) IOPS. With Azure Premium Storage, performance depends on the disk size, with 1-TB disks supporting up to 200 MBps and 5,000 256-KB IOPS. You can increase the overall throughput and IOPS of Azure virtual machines by creating multiple-disk volumes.
  • At the present time, any virtual disks that you intend to attach to Azure virtual machines must be in the .vhd format. You do not have the option of Generation 2 Hyper‑V virtual machines in Azure. Additionally, no support exists for dynamically expanding or differencing virtual disks—they all must be fixed.

Note: Azure Site Recovery helps you to protect on-premises Generation 2 virtual machines with virtual disks in the .vhdx format. You accomplish this by performing an automatic conversion to Generation 1 virtual machines with .vhd-formatted virtual disks when uploading the virtual disks to an Azure storage account (see the previous post about ASR).

  • Azure virtual machines place exclusive locks on attached virtual disk files. You cannot provide multiple virtual machines with shared access to the same virtual disk. As a result, you cannot implement clustering configurations that depend on shared storage to establish a quorum. You can, however, implement clustering configurations that use the node majority to establish a quorum. For example, you use this approach when deploying AlwaysOn Availability Groups in Microsoft SQL Server for failover clustering in Azure virtual machines.
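As mentioned in the list above, you can work around the per-disk size and performance limits by striping multiple data disks into a single volume. The following Storage Spaces sketch runs inside a Windows Server virtual machine after you attach the data disks; the pool and volume names are hypothetical.

  # Pool all data disks that are eligible for pooling.
  $disks = Get-PhysicalDisk -CanPool $true
  New-StoragePool -FriendlyName "DataPool" -PhysicalDisks $disks `
      -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName

  # Stripe across all disks with no resiliency; Azure Storage already keeps replicas.
  New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" `
      -ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns $disks.Count

  # Initialize, partition, and format the striped disk as a single NTFS volume.
  Get-VirtualDisk -FriendlyName "DataDisk" | Get-Disk |
      Initialize-Disk -PartitionStyle GPT -PassThru |
      New-Partition -AssignDriveLetter -UseMaximumSize |
      Format-Volume -FileSystem NTFS -Confirm:$false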

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

How to move your workloads to Azure

When integrating your on-premises environment with Azure, you might want to use the lift-and-shift approach to migrate some of your existing workloads. There are several ways to do this. You can use Azure Site Recovery (see here), or you can use one of the most common ways to implement this approach: capture the content of the disks of physical or virtual machines residing in your datacenter, upload them to Azure Storage, and provision new Azure virtual machines based on the captured disks.

This last option applies directly when your hypervisor is Hyper-V; otherwise, you must first convert those workloads from the hypervisor they are currently running on to Hyper-V, and there are several tools that you can use for that conversion. The sketch below illustrates the capture-and-upload path for the Hyper-V case.
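This is a minimal sketch only; the paths, resource group, and storage account names are hypothetical. Convert-VHD requires the Hyper-V PowerShell module, and Add-AzureRmVhd uploads to a storage account in the ARM deployment model.

  # Azure accepts only fixed-size .vhd files, so convert .vhdx or dynamic disks first.
  Convert-VHD -Path "D:\Disks\workload.vhdx" `
      -DestinationPath "D:\Disks\workload.vhd" -VHDType Fixed

  # Upload the converted disk to an Azure storage account.
  Add-AzureRmVhd -ResourceGroupName "mdnoga-rg" `
      -Destination "https://mdnogastorage.blob.core.windows.net/vhds/workload.vhd" `
      -LocalFilePath "D:\Disks\workload.vhd"

You can then create a new Azure virtual machine that uses the uploaded .vhd as its operating system disk.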

When you migrate your on-premises workloads to Azure virtual machines, you can select from a wide range of available sizes and configuration options to match performance characteristics between the two environments. You must also consider deployment factors that are specific to Azure during the transition.

Virtual machine sizes in Azure

Microsoft organizes virtual machine sizes into several categories to help you select the option best matching your deployment workload. First, you must choose one of the following two tiers:

  • Basic tier. Use this tier only for test and development workloads, because of its limited disk throughput of 300 IOPS and its lack of support for load balancing or auto-scaling. The basic tier has five virtual machine sizes.
  • Standard tier. Use this tier for any production workload. By selecting the standard tier, you can choose from over 70 virtual machine sizes and use features such as Premium Storage, virtual machine scale sets, and Azure Load Balancer.

The standard tier has several subcategories, or series, including the following:

  • A-series with general-purpose compute instances. This series contains eight virtual machine sizes, ranging from A0 through A7. It is an economical option for simple production workloads.
  • A-series with compute-intensive instances. This series contains four virtual machine sizes, ranging from A8 through A11. It supports compute-intensive and network-intensive workloads, such as simulation and modeling tasks running on HPC clusters.
  • D-series. The series consists of eight virtual machine sizes, ranging from D1 through D4 and from D11 through D14. Their primary benefit is the local solid-state drive (SSD) storage on the Hyper‑V hosts where the virtual machines are running. Azure allocates this storage to the temporary volume on each virtual machine, which you can use, for example, to store an operating system swap file or a SQL Server tempdb.
  • Dv2-series. This virtual machine series matches the disk and memory characteristics of the D-series virtual machines but offers processors that are roughly 35 percent faster. The series consists of 10 virtual machine sizes, ranging from D1v2 through D5v2 and from D11v2 through D15v2.
  • F-series. The series consists of five virtual machine sizes, including F1, F2, F4, F8, and F16. It provides optimized performance for compute-intensive workloads, matching the CPU characteristics of the Dv2-series virtual machines but with less memory and smaller disks, which translate into a lower price.
  • G-series. This series consists of five virtual machine sizes, ranging from G1 through G5. These virtual machines offer the highest memory-per-CPU ratio, with G5 containing 448 GB of RAM, which makes it the largest virtual machine size currently offered by any major cloud provider.
  • DS-series, DSv2-series, Fs-series, and GS-series. Virtual machine sizes in these series match their respective D, Dv2, F, and G-series virtual machine sizes with one significant exception—they all support Premium Storage. This allows you to implement virtual machines hosting workloads that require high-performance, low-latency access to their disks.
  • N-series. This series is in preview at the time of this writing. It consists of three NV virtual machine sizes, including NV6, NV12, and NV24 and three NC virtual machine sizes, including NC6, NC12, and NC24. The N in their names designates the NVIDIA graphics processing unit (GPU), which enhances the performance of graphics-intensive workloads that this series of virtual machines is intended for.

For more information about virtual machine sizes, refer to Sizes for virtual machines in Azure.
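Before settling on a size, you can also list exactly what a given region offers. Here is a quick sketch using the AzureRM PowerShell module; the region name is just an example.

  # List every virtual machine size available in the West US region,
  # including core count, memory, and the maximum number of data disks.
  Get-AzureRmVMSize -Location "West US" |
      Sort-Object NumberOfCores |
      Format-Table Name, NumberOfCores, MemoryInMB, MaxDataDiskCount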

Microsoft Azure Cost Estimator

To better understand the usage of your existing on-premises infrastructure and to estimate the cost of running it in Azure, you can use the Microsoft Azure Cost Estimator tool. The tool scans your hardware and resource utilization for the duration and at the frequency you specify. Based on the collected data, it recommends the Azure virtual machine sizes you should choose and estimates the cost of running your workloads in Azure over 30 days.

The tool can scan any of the following machine types:

  • Microsoft virtualization technologies (System Center Virtual Machine Manager, Hyper‑V)
  • VMware virtualization technologies (vCenter, ESXi)
  • Physical infrastructure (Windows, Linux)

You can install the tool on any of the following operating systems:

  • Windows Server 2012 or later
  • Windows Server 2008 R2 SP1
  • Windows Server 2008 with Service Pack 2 (SP2)
  • Windows 10
  • Windows 8.1
  • Windows 8
  • Windows 7 SP1
  • Windows Vista SP2

Note: When planning to move virtual machine workloads to Azure, consider using Azure Site Recovery. Azure Site Recovery automatically converts Generation 2 Hyper‑V virtual machines to Generation 1 when uploading them to Azure Storage. For more information, refer to the post How to implement Azure Site Recovery.

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga