Load Balancer and Availability Set with multiple VMs

When it comes to best practices for setting up multiple virtual machines behind a load balancer and within an availability set, the information out there is either outdated or hard to find.

What is the scenario? Imagine that you need to set up a few VMs that have to share configuration and some files between them. How could you do it?

After a few searches on the web, I came across the IIS and Azure Files blog post. That post is dated October 2015, and as you know, Azure changes at a very fast pace. My first thought was: is this still applicable? After a few tests in my test environment, I found that it is! Surprisingly! So, if you follow all the steps in the post, you can configure your environment.

In my case, there was a specific requirement that made this approach inapplicable: my workloads required low latency. So I went searching again for how I could achieve this, and then I found the solution on GitHub! Microsoft publishes a template where the only thing you need to do is fill in the blanks. THANK YOU!

This is the template that I’m referring to: 201-vmss-win-iis-app-ssl.

Solution overview and deployed resources

This template will create the following Azure resources:

  1. A VNet with two subnets. The VNet and subnet IP prefixes are defined in the variables section, i.e. appVnetPrefix, appVnetSubnet1Prefix and appVnetSubnet2Prefix respectively. Set these accordingly.
  2. An NSG to allow HTTP, HTTPS and RDP access to the VMSS. The NSG is assigned to the subnets.
  3. Two NICs, two public IPs and two VMSSs running Windows Server 2012 R2.
    3.1) The first VMSS is used for hosting the website, and the second VMSS is used for hosting the services (Web API/WCF, etc.).
    3.2) The VMSSs are load balanced with Azure load balancers. The load balancers are configured to allow RDP access through port ranges.
    3.3) The VMSSs are configured to auto scale based on CPU usage. The scaled-out instances are automatically configured with Windows features, application deployment packages, SSL certificates, the necessary IIS sites and SSL bindings.
  4. The first VMSS is deployed with a .pfx certificate installed in the specified certificate store. The source certificate is stored in an Azure Key Vault.
  5. The DSC script configures various Windows features such as the IIS/Web Server role, the IIS management service and tools, .NET Framework 4.5, custom logging, request monitoring, HTTP tracing, Windows authentication, application initialization, etc.
  6. DSC downloads Web Deploy 3.6 and URL Rewrite 2.0 and installs the modules.
  7. DSC downloads an application deployment package from an Azure Storage account and installs it in the default website.
  8. DSC finds the certificate in the local store and creates a 443 binding.
  9. DSC creates the necessary rules so that any incoming HTTP traffic gets automatically redirected to the corresponding HTTPS endpoints.

The following resources are deployed as part of the solution:

A VNet with two subnets

The VNet and subnet IP prefixes are defined in the variables section, i.e. appVnetPrefix, appVnetSubnet1Prefix and appVnetSubnet2Prefix respectively. Set these accordingly.

  • An NSG to define the security rules. It defines the rules for HTTP, HTTPS and RDP access to the VMSS and is assigned to the subnets.
  • Two NICs, two public IPs and two VMSSs running Windows Server 2012 R2.
  • Two Azure load balancers, one for each VMSS.
  • Storage accounts for the VMSSs as well as for the artifacts.

Prerequisites

  1. You should have a custom domain ready and point the custom domain to the FQDN of the first public IP (the public IP for the web load balancer).
  2. SSL certificate: You should have a valid SSL certificate, either purchased from a CA or self-signed.
  3. Create an Azure Key Vault and upload the certificate to the Key Vault. Currently, Azure Key Vault supports certificates in .pfx format. If the certificates are not in .pfx format, import them into a Windows certificate store on a local machine and then export them to .pfx format with the embedded private key and root certificate. A PowerShell sketch of this upload, together with the template deployment, follows this list.
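
If it helps, here is a minimal sketch of prerequisite 3 and of deploying the template itself, assuming the AzureRM PowerShell module. The names (webapp-rg, ContosoKeyVault, webapp.pfx), the parameters file and the quickstart template URI are all placeholders, and the JSON wrapper around the .pfx follows the format commonly used for pushing certificates from Key Vault to VMs, so treat this as a starting point rather than the template's official instructions.

Login-AzureRmAccount

# Create a Key Vault that templates and VM deployments are allowed to read from
New-AzureRmResourceGroup -Name webapp-rg -Location westus
New-AzureRmKeyVault -VaultName ContosoKeyVault -ResourceGroupName webapp-rg -Location westus -EnabledForDeployment -EnabledForTemplateDeployment

# Wrap the .pfx (with its password) in the JSON structure expected for VM certificate deployment
$pfxBytes = [System.IO.File]::ReadAllBytes('C:\certs\webapp.pfx')
$json = @{ data = [System.Convert]::ToBase64String($pfxBytes); dataType = 'pfx'; password = 'P@ssw0rd!' } | ConvertTo-Json
$secretValue = ConvertTo-SecureString -String ([System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($json))) -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName ContosoKeyVault -Name webappcert -SecretValue $secretValue

# Deploy the quickstart template, supplying its parameters from a local file
New-AzureRmResourceGroupDeployment -ResourceGroupName webapp-rg -TemplateUri 'https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/201-vmss-win-iis-app-ssl/azuredeploy.json' -TemplateParameterFile 'C:\deploy\azuredeploy.parameters.json'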

 

Cheers,

Marcos Nogueira
Azure MVP
azurecentric.com
Twitter: @mdnoga

 

Creating a Point-to-Site VPN on Azure

To understand more deeply what options you have to connect your organization to Azure, I recommend reading this older post. In this post, I want to share what you need to configure to implement a Point-to-Site (P2S) VPN between individual PCs in your organization and your Azure environment.

This is the typical process for creating and configuring a virtual network with point-to-site connectivity:

  1. Create the root and client certificates. Certificates facilitate authentication of the VPN tunnel. To create a self-signed root certificate, you can use the makecert.exe command-line tool to run the following command:
    makecert -sky exchange -r -n "CN=RootCertificateName" -pe -a sha1 -len 2048 -ss My "RootCertificateName.cer"
  2. Next, you need to generate client certificates. If you created a self-signed root certificate, you can use the same makecert.exe command-line tool with the following parameters:
    makecert.exe -n "CN=ClientCertificateName" -pe -sky exchange -m 96 -ss My -in "RootCertificateName" -is my -a sha1
    This command creates a client certificate and stores it in your user account’s personal certificate store on the local computer. You can create as many client certificates as needed by using this same command with different values for the -n parameter. I recommend that you create a unique client certificate for each VPN client, so that you can revoke certificates on a per-user basis. After you create the client certificates, export them in the Personal Information Exchange (.pfx) format and import them into the Personal certificate store on the computer of each user that will be using the point-to-site VPN (a PowerShell export sketch follows this list).
  3. Create a dynamic routing gateway. A gateway is a mandatory component for a point-to-site VPN connection. You will need to create a corresponding subnet named GatewaySubnet to host the gateway, as well as define a VPN client IP address pool. You will also need to request a dynamically allocated public IP address. Provisioning a new point-to-site VPN gateway usually takes up to 15 minutes.
  4. Download and install the VPN client software. After you configure a dynamic gateway and certificates, you will see a link to download a VPN client for a supported operating system. Download the appropriate VPN client (32-bit or 64-bit), and install it on client computers that will be initiating a VPN connection. These are the same computers onto which you installed the client certificates in the first step.
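
If you prefer PowerShell over the Certificates console for the export described in step 2, here is a minimal sketch. The certificate subject and file paths are examples, and Export-PfxCertificate/Import-PfxCertificate come from the PKI module available on Windows 8.1/Windows Server 2012 R2 and later.

# Export the client certificate, including its private key, to a password-protected .pfx
$clientCert = Get-ChildItem -Path 'Cert:\CurrentUser\My' | Where-Object { $_.Subject -eq 'CN=ClientCertificateName' }
$pfxPassword = ConvertTo-SecureString -String 'P@ssw0rd!' -AsPlainText -Force
Export-PfxCertificate -Cert $clientCert -FilePath 'C:\certs\ClientCertificateName.pfx' -Password $pfxPassword

# On each user's computer, import the .pfx into the Personal store
Import-PfxCertificate -FilePath 'C:\certs\ClientCertificateName.pfx' -CertStoreLocation 'Cert:\CurrentUser\My' -Password $pfxPassword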

Note: At the present time, the Azure portal does not support creation of a point-to-site virtual network.

Creating a point-to-site connection

The following procedure describes how to create a virtual network and configure a point-to-site virtual network connection by using Azure PowerShell commands.

Configure Azure prerequisites for a point-to-site connection

To configure Azure prerequisites for a point-to-site connection:

  1. Start Azure PowerShell and sign in to your subscription. Type the following command, and then press Enter:
    Login-AzureRmAccount
  2. If there are multiple subscriptions associated with your account, select the target subscription in which you are going to create the virtual network and configure the point-to-site VPN. Type the following command, and then press Enter:
    Select-AzureRmSubscription -SubscriptionId <SUBSCRIPTION_ID>
  3. Create a new resource group. Type the following command, and then press Enter:
    New-AzureRmResourceGroup -Name P2S-RG -Location westus
  4. Create a new VNet named VNet1 with an address space (for example, 10.0.0.0/12). Type the following command, and then press Enter:
    $vnet = New-AzureRmVirtualNetwork -ResourceGroupName P2S-RG -Name VNet1 -AddressPrefix 10.0.0.0/12 -Location westus
  5. Add a front-end subnet to the new virtual network. Type the following command, and then press Enter:
    Add-AzureRmVirtualNetworkSubnetConfig -Name FrontEnd -VirtualNetwork $vnet -AddressPrefix 10.11.0.0/16
  6. Add a gateway subnet to the new virtual network. Type the following command, and then press Enter:
    Add-AzureRmVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet -AddressPrefix 10.15.255.0/26
  7. Update the configuration of the virtual network so that the new subnets are created in Azure, and refresh the $vnet variable with the result. Type the following command, and then press Enter:
    $vnet = Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
  8. Set a variable for the gateway subnet for which you will request a public IP address. Type the following command, and then press Enter:
    $subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
  9. Request a dynamically assigned public IP address. Type the following command, and then press Enter:
    $pip = New-AzureRmPublicIpAddress -Name P2SGWPIP -ResourceGroupName P2S-RG -Location westus -AllocationMethod Dynamic
  10. Provide the IP configuration that is required for the VPN gateway. Type the following command, and then press Enter:
    $ipconfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name GWIPConfig -Subnet $subnet -PublicIpAddress $pip

Create root and client certificates

You need to provision certificates to authenticate clients as they connect to the VPN gateway and to encrypt the resulting connection. You must generate a self-signed root certificate, upload it to the Azure portal, reference it to generate a client certificate, and then install the client certificate on your computer. To complete these tasks, use the following steps:

  1. For computers running Windows 10 you need to install the Windows 10 SDK, and then open the command prompt in the location where the makecert.exe tool is installed. On computers running the 64-bit version of Windows 10, the default installation location is the platform specific subfolder under the C:\Program Files (x86)\Windows Kits\10\bin folder. On computers running the 32-bit version of Windows 10, the default installation location is the platform specific subfolder under C:\Program Files\Windows Kits\10\bin.
  2. To generate the root certificate, type the following command at the command prompt, and then press Enter:
    makecert -sky exchange -r -n "CN=ContosoRootCertificate" -pe -a sha1 -len 2048 -ss My "ContosoRootCertificate.cer"
  3. In the location where you ran the makecert tool, export the ContosoRootCertificate from the Personal certificate store into a Base64-encoded string, and then store it in the variable $rootCertText:
    $rootCer = Get-ChildItem -Path 'Cert:\CurrentUser\My' | Where-Object {$_.Subject -eq 'CN=ContosoRootCertificate'}
    $rootCertText = [System.Convert]::ToBase64String($rootCer.RawData)
  4. To prepare the root certificate for use as the Azure virtual network VPN root certificate, type the following command at the Windows PowerShell prompt, and then press Enter:
    $rootCert = New-AzureRmVpnClientRootCertificate -Name ContosoRootCert -PublicCertData $rootCertText
  5. To generate the client certificate, type the following command at the command prompt, and then press Enter:
    makecert.exe -n "CN=ContosoClientCertificate" -pe -sky exchange -m 96 -ss My -in "ContosoRootCertificate" -is my -a sha1

Create an Azure VPN gateway

Point-to-site connections require a virtual network gateway in the virtual network to route traffic to and from on-premises client computers. You also need to define an IP address pool from which addresses will be allocated to clients that use the point-to-site VPN connection. In the command that follows, you use the 192.168.0.0/24 IP address range. To create the virtual network gateway, type the following command, and then press Enter:

New-AzureRmVirtualNetworkGateway -Name ContosoGateway -ResourceGroupName P2S-RG -Location westus -IpConfigurations $ipconfig -GatewayType Vpn -VpnType RouteBased -EnableBgp $false -GatewaySku Standard -VpnClientAddressPool "192.168.0.0/24" -VpnClientRootCertificates $rootCert

Create and install the VPN client configuration package

To connect to the VPN, a client must use a VPN client configuration package and must have installed the client certificate that you created earlier:

  1. To retrieve the URL link to download a VPN Client Configuration package for 64-bit VPN clients, type the following command, and then press Enter:
    Get-AzureRmVpnClientPackage -ResourceGroupName P2S-RG -VirtualNetworkGatewayName ContosoGateway -ProcessorArchitecture Amd64
  2. Copy the URL generated from the previous command, paste it into a browser, and then download and install the VPN package.
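
If you would rather script the download than paste the URL into a browser, a small sketch follows. Note that the URL returned by the cmdlet is sometimes wrapped in quotation marks, so the Trim call below is a defensive assumption.

# Retrieve the download URL and save the package locally
$packageUrl = Get-AzureRmVpnClientPackage -ResourceGroupName P2S-RG -VirtualNetworkGatewayName ContosoGateway -ProcessorArchitecture Amd64
Invoke-WebRequest -Uri $packageUrl.Trim('"') -OutFile 'C:\Temp\VpnClientPackage.exe'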

Connect to the VPN

After you have installed both the client certificate and the VPN client configuration package, you can connect to the virtual network. To do so:

  1. Navigate to the list of VPN connections and locate the VPN connection that you created. The name of the VPN connection will be the same as the name of the virtual network in Azure.
  2. Right-click the connection, and then click Connect.
  3. Click Continue, and then click Connect.

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

Options to connect your datacenter to Azure Virtual Networks – Part 4 – ExpressRoute

In one of the previous posts (see here), I briefly described the options that you have for cross-premises connections to Azure. In this post, I want to explore one of those options in more detail: ExpressRoute.

ExpressRoute delivers private, network-layer connectivity between on-premises networks and the Microsoft Cloud, without crossing the Internet, in the form of:

  • Private peering. This includes connections to Azure virtual machines and Azure cloud services residing on Azure virtual networks. You have the ability to establish connectivity to multiple Azure virtual networks, with up to 10 virtual networks with the standard ExpressRoute offering and up to 100 virtual networks with the ExpressRoute Premium add-on.
  • Public peering. This includes connections to Azure services not accessible directly via Azure virtual networks, such as Azure Storage or Microsoft Azure SQL Database. With public peering you can ensure that traffic from on-premises locations to Azure public IP addresses does not cross the Internet. It also delivers predictable performance and latency when connecting to these IP addresses.
  • Microsoft peering. This includes connections to the Office 365 and Microsoft Dynamics CRM Online services.

Each of these peering arrangements constitutes a separate routing domain, but all of them are provisioned over the same physical connection. You have the option of combining them into the same routing domain, although the recommendation is to implement private peering between the internal network and Azure virtual networks, while limiting the scope of public peering and Microsoft peering to on-premises perimeter networks.

Each peering arrangement allows you to connect to all Azure regions in the same geopolitical region as the location of the ExpressRoute circuit. You can expand the scope of the connectivity globally by provisioning the ExpressRoute Premium add-on.

Note: The only Azure services not supported by public peering at the present time include:

  • Content Delivery Network (CDN)
  • Visual Studio Team Services load testing
  • Microsoft Azure Multi-Factor Authentication
  • Azure Traffic Manager

From the provisioning standpoint, besides implementing physical connections, you also need to create one or more logical ExpressRoute circuits. You can identify each individual circuit based on its service key (s-key), which takes the form of a globally unique identifier (GUID). A single circuit can support up to three routing domains (private, public, and Microsoft, as listed above). Each circuit has a specific nominal bandwidth associated with it, which can range between 50 Mbps and 10 Gbps, shared across the routing domains. You have the option to increase or decrease the amount of provisioned bandwidth without the need to re-provision the circuit.
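
To make this concrete, here is a minimal AzureRM PowerShell sketch of provisioning a circuit and reading back its service key. The resource group, provider, peering location and bandwidth below are placeholder values, so check what your connectivity provider actually offers before running anything like this.

# Create a metered, Standard-tier ExpressRoute circuit (provider and location are placeholders)
New-AzureRmResourceGroup -Name er-rg -Location westus
$circuit = New-AzureRmExpressRouteCircuit -Name ContosoCircuit -ResourceGroupName er-rg -Location westus -SkuTier Standard -SkuFamily MeteredData -ServiceProviderName 'Equinix' -PeeringLocation 'Silicon Valley' -BandwidthInMbps 200

# The service key (s-key) is what you hand to your connectivity provider to complete provisioning
$circuit.ServiceKey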

In private peering scenarios, establishing a connection to a target virtual network requires creating a link between the ExpressRoute circuit and the Azure VPN gateway attached to that virtual network (a sketch of creating this link follows the list below). As a result, the effective throughput on a per-virtual-network basis depends on the SKU of the VPN gateway:

  • Basic. Up to 500 Mbps. It does not support the coexistence of a site-to-site VPN and ExpressRoute.
  • Standard. Up to 1,000 Mbps. It supports the coexistence of a site-to-site VPN and ExpressRoute.
  • High Performance. Up to 2,000 Mbps. It supports the coexistence of a site-to-site VPN and ExpressRoute.
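
As a rough sketch of creating that link, assuming the circuit object from the previous sketch in $circuit and an ExpressRoute virtual network gateway already provisioned in the target virtual network (the names below are placeholders):

# Retrieve the existing gateway and link it to the ExpressRoute circuit
$gw = Get-AzureRmVirtualNetworkGateway -Name ContosoErGateway -ResourceGroupName er-rg
New-AzureRmVirtualNetworkGatewayConnection -Name ContosoErConnection -ResourceGroupName er-rg -Location westus -VirtualNetworkGateway1 $gw -PeerId $circuit.Id -ConnectionType ExpressRoute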

There are three ExpressRoute connectivity models:

  • A co-location in a facility hosting an ExpressRoute exchange provider. This facilitates private routing to Microsoft Cloud by using either Layer 2 or managed Layer 3 cross-connect with the exchange provider.
  • A Layer 2 or managed Layer 3 connection to an ExpressRoute point-to-point provider.
  • An any-to-any (IPVPN) network, implemented commonly as a Multiprotocol Label Switching (MPLS) cloud, with a wide area network (WAN) provider handling Layer 3 connectivity to the Microsoft Cloud.

Because ExpressRoute depends on having access to provider services, its availability depends on the customer location. For up-to-date information, refer to ExpressRoute partners and peering locations.

ExpressRoute routing is dynamic and relies on Border Gateway Protocol (BGP) route exchange between the on-premises environment and the Microsoft Cloud. You can advertise up to 4,000 prefixes (up to 10,000 with the ExpressRoute Premium add-on) within the private peering routing domain and up to 200 in the case of public peering and Microsoft peering. The prefixes that you advertise via BGP comprise one or more autonomous systems. Each autonomous system that relies on BGP route exchange has a corresponding autonomous system number (ASN). There are two types of ASNs: public and private. A public ASN is globally unique and supports exchanging routing information with any other autonomous system on the Internet. A private ASN is useful in scenarios that involve route exchange with a single provider only, which eliminates the requirement of global uniqueness. ExpressRoute requires a public ASN with all three peering scenarios.

To facilitate routing between your on-premises network and the Microsoft edge routers, you will need to designate several ranges of IP addresses. Specifics of this configuration depend to some extent on the peering arrangement (a private peering configuration sketch follows this list), but:

  • You must choose a pair of /30 subnets or a /29 subnet for each peering type.
  • Each of the two /30 subnets will facilitate a separate BGP session. It is necessary to establish two sessions to qualify for the ExpressRoute availability SLA.
  • With private peering, you can use either private or public IP addresses. With public peering and Microsoft peering, public IP addresses are mandatory.
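
To illustrate, here is a hedged sketch of configuring private peering on an existing circuit. The ASN, VLAN ID and the two /30 prefixes are example values and must match what you agree on with your provider.

# Add a private peering configuration with two /30 subnets (primary and secondary BGP sessions)
Add-AzureRmExpressRouteCircuitPeeringConfig -Name AzurePrivatePeering -ExpressRouteCircuit $circuit -PeeringType AzurePrivatePeering -PeerASN 100 -VlanId 200 -PrimaryPeerAddressPrefix '10.100.0.0/30' -SecondaryPeerAddressPrefix '10.100.0.4/30'

# Persist the peering change on the circuit
Set-AzureRmExpressRouteCircuit -ExpressRouteCircuit $circuit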

Some providers manage routing of ExpressRoute traffic as part of their managed services. Usually, however, when provisioning ExpressRoute via Layer 2 connectivity providers, routing configuration and management is the customer’s responsibility.

For more details, refer to ExpressRoute routing requirements.

Note: ExpressRoute does not support transitive routing between on-premises locations. If you need this functionality, you must implement it based on the services offered by your connectivity provider.

The cost of ExpressRoute depends primarily on the billing model that you choose when provisioning the service. At the time of writing, there are three billing models:

  • Unlimited data. This set monthly fee covers the service as well as an unlimited amount of data transfers.
  • Metered data. This set monthly fee covers the service. There is an additional charge for outbound data transfers on a per GB basis. Prices depend on the zone where the Azure region resides.
  • ExpressRoute Premium add-on. This service extension provides additional ExpressRoute capabilities, including:
    • An increased number of routes that can be advertised in the public and private peering scenarios, up to the 10,000-route limit.
    • Global connectivity to the Microsoft Cloud from a circuit in an individual Azure region.
    • An increased number of virtual network links from an individual circuit, up to the 100-link limit.

When you evaluate the total cost of an ExpressRoute-based solution with private peering configuration, you should also take into account the cost of VPN gateways that will provide connectivity to individual virtual networks. As mentioned earlier, the cost of a VPN gateway depends on its SKU, with three pricing tiers: Basic, Standard, or High Performance.

From the resiliency standpoint, ExpressRoute circuits support a pair of connections between your network edge devices and Microsoft edge routers via a redundant infrastructure maintained by a connectivity provider. You must deploy redundant connections on your end of the circuit to qualify for the 99.9 percent circuit availability SLA. In private peering scenarios, each link to an individual virtual network is subject to the 99.9 percent availability SLA applicable to the Azure VPN gateway.

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

Options to connect your datacenter to Azure Virtual Networks – Part 3 – Site-to-Site VPN

In one of the previous posts (see here), I briefly described the options that you have for cross-premises connections to Azure. In this post, I want to explore one of those options in more detail: Site-to-Site VPN.

Site-to-site VPNs rely on static routes to direct traffic between on-premises networks and Azure virtual networks. The Azure platform generates these routes when you create the site-to-site VPN connection based on two pieces of data: the IP address space that you assigned to the Azure virtual network and the local network, which you define in the process of setting up the VPN connection. The local network represents the IP address space of your on-premises networks.

Keep in mind that Azure implements the routing configuration only on the Azure virtual network side. For cross-premises connectivity to function, you must also update the on-premises routing configuration.

The site-to-site VPN method employs the IPSec protocol with a pre-shared key to provide authentication between the on-premises VPN gateway and the Azure VPN gateway. The key is an alphanumeric string between 1 and 128 characters.
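
To show where the local network definition and the pre-shared key fit in, here is a minimal AzureRM PowerShell sketch. The address prefix, the public IP of the on-premises VPN device and the key itself are placeholders, and it assumes a route-based Azure VPN gateway already exists.

# Describe the on-premises network (the local network) and the public IP of its VPN device
$local = New-AzureRmLocalNetworkGateway -Name OnPremNetwork -ResourceGroupName S2S-RG -Location westus -GatewayIpAddress '203.0.113.10' -AddressPrefix '192.168.0.0/16'

# Create the site-to-site connection using the existing Azure VPN gateway and a pre-shared key
$azureGw = Get-AzureRmVirtualNetworkGateway -Name AzureVpnGateway -ResourceGroupName S2S-RG
New-AzureRmVirtualNetworkGatewayConnection -Name OnPremToAzure -ResourceGroupName S2S-RG -Location westus -VirtualNetworkGateway1 $azureGw -LocalNetworkGateway2 $local -ConnectionType IPsec -RoutingWeight 10 -SharedKey 'Abc123Abc123'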

From the infrastructure standpoint, in addition to a reliable connection to the Internet from your on-premises network, a site-to-site VPN requires a VPN gateway on each end of the VPN tunnel. On the Azure side, you provision a VPN gateway as part of creating a site-to-site VPN. Its characteristics depend on a couple of factors:

  • The VPN gateway SKU determines capacity and performance characteristics. There are three SKUs available in this case:
    • The Basic and Standard SKUs offer up to 100 Mbps throughput with a maximum of 10 IPSec tunnels.
    • The High Performance SKU offers up to 200 Mbps throughput with a maximum of 30 IPSec tunnels.
  • The VPN gateway type determines functional characteristics. The type of the Azure VPN gateway depends directly on the type of the VPN gateway used on premises, because they have to match. There are two types of VPN gateways:
    • Policy-based (formerly known as static).
    • Route-based (formerly known as dynamic).

Note: You can increase or decrease the SKU of a VPN gateway on an as-needed basis. However, you cannot change the type of an existing gateway.
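
For the SKU change, a hedged sketch with the AzureRM module looks roughly like this (gateway and resource group names are placeholders):

# Change the SKU of an existing gateway without recreating it
$gw = Get-AzureRmVirtualNetworkGateway -Name AzureVpnGateway -ResourceGroupName S2S-RG
Resize-AzureRmVirtualNetworkGateway -VirtualNetworkGateway $gw -GatewaySku HighPerformance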

Note: The effective throughput of VPN connections might vary, depending on the bandwidth of the Internet connection and impact of encryption associated with the VPN functionality.

Policy-based VPN devices operate according to local IPSec policies that you define. The policies determine whether to encrypt and direct traffic that reaches an IPSec tunnel interface based on the source and target IP address prefixes.

Route-based VPN devices rely on routes in the local routing table that you define to deliver traffic to a specific IPSec tunnel interface, which, at that point, performs encryption and forwards the encrypted network packets. In other words, in this case, any traffic reaching the interface is automatically encrypted and forwarded to the Azure VPN gateway on the other end of the tunnel.

Note: A site-to-site VPN does not support transitive routing between on-premises locations.

The choice of the device type has a number of significant implications:

  • Policy-based VPN devices support only a single site-to-site connection. With route-based VPN devices, that number depends on the Azure VPN gateway SKU, with up to 10 connections in case of the Basic and Standard SKUs and up to 30 connections in case of the High Performance SKU.
  • Policy-based VPN devices do not support point-to-site VPNs. This becomes important when you want to provide shared access to an Azure virtual network to clients connecting via a site-to-site VPN and a point-to-site VPN. Effectively, to implement this functionality, you would have to use a route-based VPN gateway in Azure, which implies the need to have the matching VPN device type on premises.
  • From the encryption standpoint, policy-based VPN devices support the Internet Key Exchange version 1 (IKEv1) protocol, the AES256 (Advanced Encryption Standard), AES128, and 3DES (Data Encryption Standard) encryption algorithms, as well as the SHA1 (SHA128) (Secure Hash Algorithm) hashing algorithm. Route-based VPN devices offer support for IKEv2, the AES256 and 3DES encryption algorithms (during IKE Phase 1 setup), as well as both the SHA1 (SHA128) and SHA2 (SHA256) hashing algorithms (again, during IKE Phase 1 setup). In addition, they also support perfect forward secrecy (DH Groups 1, 2, 5, 14, and 24).

Specifics of on-premises site-to-site VPN configuration are device specific. Microsoft offers configuration instructions for each of the validated VPN devices. Non-validated VPN devices may support site-to-site VPN, but they require independent testing.

For a list of VPN devices that Microsoft has validated in partnership with their vendors, and their configuration instructions, refer to VPN devices for Site-to-Site VPN Gateway connections.

There are additional considerations regarding your on-premises infrastructure. In particular, if your VPN gateway resides on the perimeter network behind a firewall, you must ensure that the following types of traffic are allowed to pass through for both the inbound and outbound directions:

  • IP protocol 50 (Encapsulating Security Payload, ESP)
  • UDP port 500
  • UDP port 4500

The cost of site-to-site VPNs comprises two main components. The easiest one to estimate is the hourly cost of the virtual machines hosting the VPN gateway, which depends on its SKU, with three pricing tiers: Basic, Standard, and High Performance. In addition, there is a charge for outbound data transfers at standard data transfer rates, which depends on the volume of data and the zone in which the Azure datacenter hosting the VPN gateway resides. The first 5 gigabytes (GB) per month are free of charge, and there is no cost associated with inbound data transfers.

There is a 99.9 percent availability Service Level Agreement (SLA) for each VPN gateway. A number of third-party vendors of VPN gateway devices support redundant configurations, which increase the resiliency of the on-premises endpoint of the VPN tunnel.

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

Containers on Azure – Part 1

In the last decade, hardware virtualization has drastically changed the IT landscape. One of many consequences of this trend is the emergence of cloud computing. However, a more recent virtualization approach promises to bring even more significant changes to the way you develop, deploy, and manage compute workloads. This approach is based on the concept of containers.

This series of posts explains containers and the ways you can implement them in Azure and in your on-premises datacenter.

When this concept was introduced into the Microsoft world, it was somewhat difficult for me to understand the whole concept and how I would use it. So, my purpose here is to ease the path and explain how containers work and the ways you can implement them in Azure and in your on-premises datacenter. The goal is to make it easier to deploy clusters of containerized workloads by using Azure Container Service (ACS).

Azure Service Fabric offers an innovative way to design and provision applications by dividing them into small, independently operating components called microservices. Containers and microservices are complementary technologies. By combining them, you can further increase the density of your workloads, optimize resource usage, and minimize cost.

What are Containers?

In a very simplistic way, containers are the next stage in virtualizing computing resources. Hardware virtualization freed people to a large extent from the constraints imposed by physical hardware. It enabled running multiple isolated instances of operating systems concurrently on the same physical hardware. Container-based virtualization virtualizes the operating system, allowing you to run multiple applications within the same operating system instance while maintaining isolation. Containers within a virtual machine provide functionality similar to that of virtual machines on a physical server. To better understand this analogy, this topic compares virtual machines with containers.

The following table lists the high-level differences between virtual machines and containers.

Feature | Virtual machines | Containers
Isolation mechanism | Built in to the hypervisor | Relies on operating system support.
Required amount of memory | Includes operating system and app requirements | Includes containerized apps requirements only.
Startup time | Includes operating system boot, start of services, apps, and app dependencies | Includes only start of apps and app dependencies. The operating system is already running.
Portability | Portable, but the image is larger because it includes the operating system | More portable, because the image includes only apps and their dependencies.
Image automation | Depends on the operating system and apps | Based on the Docker registry (for Docker images).

To better understand the difference between virtual machines and containers, I highly suggest reading the article Virtual Machines and Containers in Azure.

Compared with virtual machines, containers offer several benefits, including:

  • Increased speed with which you can develop and share application code.
  • An improved testing lifecycle for applications.
  • An improved deployment process for applications.
  • The increased density of your workloads, resulting in improved resource utilization.

The most popular containerization technology is available from Docker. Docker uses Linux built-in support for containers. Windows Server 2016 includes a container feature that delivers equivalent functionality in the Windows Server operating system.

Azure Container Service

ACS allows you to administer clusters of multiple Docker hosts running containerized apps. ACS manages the provisioning of cloud infrastructure components, including Azure virtual machines and virtual machine scale sets, Azure storage, virtual networks, and load balancers. Additionally, it provides the management and scaling of containerized apps to tens of thousands of containers via integration with the following two orchestration engines:

  • The Mesosphere Datacenter Operating System (DC/OS). A distributed operating system based on the Apache Mesos project.
  • Docker Swarm. Clustering software provided by Docker.

Based on this integration, you can manage ACS clusters on the DC/OS or the Docker Swarm platform by relying on the same tools you use to manage your existing containerized workflows.

You can provision an ACS cluster directly from the Azure portal. Alternatively, you can use an Azure Resource Manager template or the Azure command-line interface (a deployment sketch follows the list below). During provisioning, you choose either DC/OS or Docker Swarm as the framework configuration. Subsequent configuration and management specifics depend mainly on this choice. Although both orchestration engines fully support Docker-formatted containers and Linux-based container isolation, they have architectural and functional differences, including:

  • DC/OS contains a Master availability set, public agent virtual machine scale set, and private agent virtual machine scale set, with fault-tolerant master/subordinate instances replicated by using Apache ZooKeeper. Docker Swarm contains a Master availability set and the agent virtual machine scale set.
  • DC/OS includes by default the Marathon orchestration platform, which manages the cluster-wide scheduling of containerized workloads. It supports multiple-resource scheduling that takes memory, CPU, disks, and ports into consideration.
  • With Docker Swarm, you can use the Docker command-line interface or the standard Docker application programming interface (API). DC/OS offers the REST API for interacting with its orchestration platform.
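
As mentioned above, one way to provision a cluster is through a Resource Manager template. The sketch below assumes the DC/OS quickstart template from the Azure quickstart templates repository and a local parameters file containing your SSH public key, DNS prefix and agent count, so verify the URI and parameter names against the repository before relying on it.

# Deploy an ACS DC/OS cluster from a quickstart template (URI and parameter file are assumptions)
New-AzureRmResourceGroup -Name acs-rg -Location westus
New-AzureRmResourceGroupDeployment -ResourceGroupName acs-rg -TemplateUri 'https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-acs-dcos/azuredeploy.json' -TemplateParameterFile 'C:\deploy\acs.parameters.json'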

Azure Service Fabric

Azure Service Fabric is a cloud-based platform for developing, provisioning, and managing distributed, highly scalable, and highly available services and applications. Its capabilities result from dividing the functionality provided by these services and applications into individual components called microservices. Common examples of such microservices include the shopping carts or user profiles of commercial websites and the queues, gateways, and caches that provide infrastructure services. Multiple instances of these microservices run concurrently on a cluster of Azure virtual machines.

This approach might sound similar to building multitier applications by using Azure Cloud Services, which allows you to independently scale web and worker tiers. However, Azure Service Fabric operates on a much more granular level, as the term microservices suggests. This allows for much more efficient resource utilization while scaling to potentially thousands of virtual machines. Additionally, it allows developers to introduce gradual changes in the code of individual application components without having to upgrade the entire application.

Another feature that distinguishes Azure Service Fabric from traditional Platform as a Service (PaaS) services is support for both stateless and stateful components. Azure Cloud Services are stateless by design. To save state information, they have to rely on other Azure services, such as Azure Storage or Azure SQL Database. Azure Service Fabric, on the other hand, offers built-in support for maintaining state information. This minimizes or even eliminates the need for a back-end storage tier. It also decreases the latency when accessing application data.

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga