Containers on Azure – Part 2

In the previous post (see here), I talked about the concept of containers, Azure Container Service, and Azure Service Fabric. Now that you know the concepts and have an idea of how to implement them, let's see how you can deploy containers in Azure.

Azure offers several ways to provision Azure virtual machines that support Docker containers:

  • Install the Docker virtual machine extension. You can either add the extension to an existing Azure virtual machine running a Docker-supported distribution of Linux or include it when deploying a new Azure virtual machine via a Resource Manager template or a command-line script. The extension installs the Docker daemon, also called the Docker Engine; the Docker client; and Docker Compose. The Docker daemon is necessary for an Azure virtual machine to function as a Docker host.
    Note: Docker Engine is a lightweight software component that runs as a daemon on a Linux operating system. It provides the environment for running containerized apps.
    The Docker client is management software that allows you to interact with the Docker Engine via a command line, which allows you to create, run, and transfer containers.
    Docker Compose is a utility for building and running Docker apps that consist of multiple containers.
  • Provision a Docker Azure virtual machine available from the Azure Marketplace. Use the Azure portal to deploy a Linux virtual machine to run Docker containerized workloads. During the virtual machine deployment, Azure automatically provisions the Docker virtual machine extension, which installs all the necessary Docker components.
  • Deploy an ACS cluster. This allows you to provision and manage multiple instances of Docker containers residing on clustered Docker hosts.
  • Use the Docker Machine driver to deploy an Azure virtual machine with support for Docker containers. Docker Machine is a command-line utility that allows you to perform several Docker-related administrative tasks, including provisioning new Docker hosts. The utility includes support for deploying Docker hosts on-premises and on Azure virtual machines. You must include the --driver azure (or -d azure) parameter when running the docker-machine create command. For example, the following command deploys a new Azure virtual machine named dockerazurevm1 in an Azure subscription. It specifies the subscription ID, creates an administrative user account named mrdocker, and enables connectivity on TCP port 80.

docker-machine create -d azure \
  --azure-ssh-user mrdocker \
  --azure-subscription-id your_Azure_subscription_ID \
  --azure-open-port 80 \
  dockerazurevm1

With the default settings, the virtual machine has the Standard_A2 size and resides on an Azure virtual network named docker-machine in a docker-machine resource group in the West US region. A default network security group associated with the virtual machine network interface allows inbound connectivity on TCP port 22 for Secure Shell connections and on TCP port 2376 for remote connections from the Docker client. The command also generates self-signed certificates that help secure subsequent communication from the computer where you ran Docker Machine, and it stores the corresponding private key in your user account profile.

For the full syntax of the docker-machine create -d azure command, refer to Microsoft Azure.

Docker Machine is available on Windows, Linux, and Mac OS X operating systems. For installation instructions and links to download locations, refer to Install Docker Machine.

Containers on an Azure Virtual Machine

Choosing the most convenient and efficient way to run containers in your environment depends on the location of your Docker hosts. Docker Machine allows you to manage on-premises and Azure-based Docker hosts in a consistent manner.

During Azure virtual machine provisioning, Docker Machine generates self-signed certificates that secure subsequent communication with the Docker host, and it stores the corresponding private key in your user account profile. This allows you to continue managing the Azure virtual machine from the same computer on which you initiated the virtual machine provisioning. To simplify management, you should also configure environment variables within your Windows command shell. To identify the environment variables to configure, run the following at the command prompt, where dockerazurevm1 is the name of the Azure virtual machine you deployed by running the docker-machine create command.

docker-machine env dockerazurevm1

This should return output similar to the following.

SET DOCKER_TLS_VERIFY="1"
SET DOCKER_HOST="tcp://191.237.46.90:2376"
SET DOCKER_CERT_PATH="C:\Users\Admin\.docker\dockerazurevm1\certs"
SET DOCKER_MACHINE_NAME="dockerazurevm1"
@FOR /f "tokens=*" %i IN ('docker-machine env dockerazurevm1') DO @%i
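On Linux or macOS, where Docker Machine is also supported, the same variables can be loaded in one step with eval instead of the Windows FOR loop shown above. This is a sketch that assumes the machine name from the earlier example:

```shell
# Load the connection settings for the remote Docker host into the
# current shell session (bash/zsh):
eval $(docker-machine env dockerazurevm1)

# The Docker client now targets the Azure virtual machine:
docker-machine active
docker info
```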

You can now start a container on the Azure virtual machine by running the following command.

docker run -d -p 80:80 --restart=always container_name

This automatically locates the container image named container_name, publishes it on port 80, initiates its execution in detached mode, and ensures that the container always restarts after it terminates, regardless of the exit status. In detached mode, the console session is not attached to the container process, so you can use it to continue managing the Docker host. In attached mode, the console session displays the standard input, output, and error from the Docker container.
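As a concrete illustration, the command below runs the public nginx image from Docker Hub with the same options; the container name web is an arbitrary choice for this sketch:

```shell
# Run nginx detached, publish port 80, and restart it automatically:
docker run -d -p 80:80 --restart=always --name web nginx

# Confirm the container is running and inspect its port mapping:
docker ps --filter name=web
```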

For the full syntax of the docker run command, refer to Docker Run.

The docker run command attempts to locate the latest version of the container image locally on the Docker host. By default, it checks the version against Docker Hub. This is a central, Docker-managed repository of Docker images available publicly via the Internet. If there is no locally cached container image or its version is out-of-date, the Docker daemon automatically downloads the latest version from Docker Hub.

You can set up a private Docker Registry to maintain your own collection of container images. A private Docker Registry runs as a container based on the registry image available from the Docker Hub. You can store your private images in an Azure storage account.

To set up a private Docker Registry, follow this procedure:

  1. Create an Azure storage account.
  2. Start a registry container on a Docker host by running the following command.
    docker run -d -p 5000:5000 \
      -e REGISTRY_STORAGE=azure \
      -e REGISTRY_STORAGE_AZURE_ACCOUNTNAME="storage_account_name" \
      -e REGISTRY_STORAGE_AZURE_ACCOUNTKEY="storage_account_key" \
      -e REGISTRY_STORAGE_AZURE_CONTAINER="registry" \
      --name=registry registry:2

    In the preceding command, storage_account_name and storage_account_key represent the name and one of the two keys of the Azure storage account you created in the previous step. This provisions a new registry container and makes it accessible via TCP port 5000.
    Note: To allow inbound connections on the port that you specified when executing the docker run command, be sure to update the network security group associated with the Docker Azure VM network interface where the registry container is running.

  3. Build a new Docker image by running the docker build command, or pull an existing image from Docker Hub by running the docker pull command.
  4. Use the following docker tag command to associate the image you created or downloaded in the previous step with the private registry.
    docker tag hello-world localhost:5000/image_name

    In the preceding command, hello-world is the image you pulled in the previous step, and image_name represents the name under which the image is stored. This tags the image and designates a new repository in your private registry.
  5. To upload the newly tagged image to the private Docker Registry, run the following command.
    docker push localhost:5000/image_name

    This pushes the image into the private registry.

  6. To download the image from the private registry, run the following command.
    docker pull localhost:5000/image_name

    If you run the docker run command, the Docker daemon uses the newly downloaded image from your private registry.
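Putting steps 3 through 6 together, the workflow looks like the following sketch, which uses the public hello-world image as the example image:

```shell
# Step 3: pull an existing image from Docker Hub
docker pull hello-world

# Step 4: tag it for the private registry listening on port 5000
docker tag hello-world localhost:5000/hello-world

# Step 5: push the tagged image to the private registry
docker push localhost:5000/hello-world

# Step 6: pull it back from the private registry
docker pull localhost:5000/hello-world

# Any subsequent docker run now uses the image from the private registry
docker run --rm localhost:5000/hello-world
```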

 

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

 

Containers on Azure – Part 1

In the last decade, hardware virtualization has drastically changed the IT landscape. One of many consequences of this trend is the emergence of cloud computing. However, a more recent virtualization approach promises to bring even more significant changes to the way you develop, deploy, and manage compute workloads. This approach is based on the concept of containers.

This series of posts explains containers and the ways you can implement them in Azure and in your on-premises datacenter.

When this concept was introduced to the Microsoft world, it was somewhat difficult for me to understand the whole concept and how I would use it. So, my purpose here is to ease the path and explain how containers work and the ways you can implement them in Azure and in your on-premises datacenter. The goal is to facilitate deploying clusters of containerized workloads by using Azure Container Service (ACS).

Azure Service Fabric offers an innovative way to design and provision applications by dividing them into small, independently operating components called microservices. Containers and microservices are complementary technologies. By combining them, you can further increase the density of your workloads, optimize resource usage, and minimize cost.

What are Containers?

In a very simplistic way, containers are the next stage in virtualizing computing resources. Hardware virtualization freed people to a large extent from the constraints imposed by physical hardware. It enabled running multiple isolated instances of operating systems concurrently on the same physical hardware. Container-based virtualization virtualizes the operating system, allowing you to run multiple applications within the same operating system instance while maintaining isolation. Containers within a virtual machine provide functionality similar to that of virtual machines on a physical server. To better understand this analogy, this topic compares virtual machines with containers.

The following list summarizes the high-level differences between virtual machines and containers.

  • Isolation mechanism. Virtual machines: built into the hypervisor. Containers: rely on operating system support.
  • Required amount of memory. Virtual machines: include operating system and app requirements. Containers: include containerized app requirements only.
  • Startup time. Virtual machines: includes operating system boot, start of services, apps, and app dependencies. Containers: include only the start of apps and app dependencies; the operating system is already running.
  • Portability. Virtual machines: portable, but the image is larger because it includes the operating system. Containers: more portable, because the image includes only apps and their dependencies.
  • Image automation. Virtual machines: depends on the operating system and apps. Containers: based on the Docker registry (for Docker images).

To better understand the difference between virtual machines and containers, I highly suggest reading the article Virtual Machines and Containers in Azure.

Compared with virtual machines, containers offer several benefits, including:

  • Increased speed with which you can develop and share application code.
  • An improved testing lifecycle for applications.
  • An improved deployment process for applications.
  • The increased density of your workloads, resulting in improved resource utilization.

The most popular containerization technology is available from Docker. Docker uses Linux built-in support for containers. Windows Server 2016 includes a container feature that delivers equivalent functionality in the Windows Server operating system.
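To make the comparison above concrete, the following minimal sketch builds and runs a Docker image. Note that the image contains only a small user-space layer and the app, while the host provides the kernel; the image and tag names here are arbitrary:

```shell
# Define a minimal image: a small base layer plus a single command
cat > Dockerfile <<'EOF'
FROM alpine:3.18
CMD ["echo", "Hello from a container"]
EOF

# Build the image and run a disposable container from it
docker build -t hello-container .
docker run --rm hello-container
```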

Azure Container Service

ACS allows you to administer clusters of multiple Docker hosts running containerized apps. ACS manages the provisioning of cloud infrastructure components, including Azure virtual machines and virtual machine scale sets, Azure storage, virtual networks, and load balancers. Additionally, it provides the management and scaling of containerized apps to tens of thousands of containers via integration with the following two orchestration engines:

  • The Mesosphere Datacenter Operating System (DC/OS). A distributed operating system built on Apache Mesos, which is maintained by the Apache Software Foundation.
  • Docker Swarm. Clustering software provided by Docker.

Based on this integration, you can manage ACS clusters on the DC/OS or the Docker Swarm platform by relying on the same tools you use to manage your existing containerized workflows.

You can provision an ACS cluster directly from the Azure portal. Alternatively, you can use the Azure Resource Manager template or Azure command-line interface. During provisioning, you choose either DC/OS or Docker Swarm as the framework configuration. Subsequent configuration and management specifics depend mainly on this choice. Although both orchestration engines fully support Docker-formatted containers and Linux-based container isolation, they have architectural and functional differences, including:

  • DC/OS contains a Master availability set, public agent virtual machine scale set, and private agent virtual machine scale set, with fault-tolerant master/subordinate instances replicated by using Apache ZooKeeper. Docker Swarm contains a Master availability set and the agent virtual machine scale set.
  • DC/OS includes by default the Marathon orchestration platform, which manages the cluster-wide scheduling of containerized workloads. It supports multiple-resource scheduling that takes memory, CPU, disks, and ports into consideration.
  • With Docker Swarm, you can use the Docker command-line interface or the standard Docker application programming interface (API). DC/OS offers the REST API for interacting with its orchestration platform.
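For reference, provisioning an ACS cluster from the command line looked roughly like the following sketch with the Azure CLI at the time ACS was current; the resource group and cluster names are placeholders, and the --orchestrator-type value selects DC/OS or Swarm:

```shell
# Create a resource group to hold the cluster resources
az group create --name acs-rg --location westus

# Provision an ACS cluster with DC/OS as the orchestrator
# (use --orchestrator-type Swarm for Docker Swarm instead)
az acs create --resource-group acs-rg --name myacscluster \
  --orchestrator-type DCOS \
  --admin-username azureuser \
  --generate-ssh-keys
```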

Azure Service Fabric

Azure Service Fabric is a cloud-based platform for developing, provisioning, and managing distributed, highly scalable, and highly available services and applications. Its capabilities result from dividing the functionality provided by these services and applications into individual components called microservices. Common examples of such microservices include the shopping carts or user profiles of commercial websites and the queues, gateways, and caches that provide infrastructure services. Multiple instances of these microservices run concurrently on a cluster of Azure virtual machines.

This approach might sound similar to building multitier applications by using Azure Cloud Services, which allows you to independently scale web and worker tiers. However, Azure Service Fabric operates on a much more granular level, as the term microservices suggests. This allows for much more efficient resource utilization while scaling to potentially thousands of virtual machines. Additionally, it allows developers to introduce gradual changes in the code of individual application components without having to upgrade the entire application.

Another feature that distinguishes Azure Service Fabric from traditional Platform as a Service (PaaS) services is support for both stateless and stateful components. Azure Cloud Services are stateless by design. To save state information, they have to rely on other Azure services, such as Azure Storage or Azure SQL Database. Azure Service Fabric, on the other hand, offers built-in support for maintaining state information. This minimizes or even eliminates the need for a back-end storage tier. It also decreases the latency when accessing application data.

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

Azure Cloud Shell Everywhere

WOW! Microsoft did it again! If you didn't see the announcements at the Microsoft Build 2017 conference, you should check them out!

They announced a lot of new things, but regarding Azure, one of my favorites is the iOS-specific application!

I see this app as a savior in so many ways! For those times when you have a problem and immediately need to run a PowerShell script, or perform an action like restarting a virtual machine, now you have the right tools within reach! Being able to access an Azure subscription through my mobile phone is amazing!

From what I was able to see and play with, it is basically the Azure portal in an iOS app! I've been setting up and playing with Azure through this app, and it is really good.

Here are some screenshots from the Azure iOS app.

 

As in the Azure portal, you can select what you want to see by choosing your favorites. This is a shortcut view of what is most interesting to you.

When you select a resource, the information and the tasks that you can perform are very similar to what is available in the Azure portal.

Even the notifications that you have in the Azure portal are available here, so you can track what is going on in your Azure tenant.

When you select a virtual machine, you can see a lot of information. Usually, smartphone apps offer only a subset of the capabilities of the main tool (in this case, the Azure portal), but with this app you can do more than 80% of your daily basic tasks.

And as you might expect, Azure Cloud Shell will be available in the Azure app as well, although it's not available yet, at least for me (see picture below).

But this is probably the main reason why I will use this app a lot. I store a lot of my scripts in OneNote, and being able to copy and paste them into this PowerShell window to set up a site-to-site VPN, for example, while waiting for your food to arrive is something that every "Azure Geek"/"Azure Centric Nerd" dreams of, right? The possibilities are unlimited!

 

 

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

 

Differences between the classic and the Azure Resource Manager deployment model

A lot of times I get these questions: Where should I create my resources? In the classic portal or in the ARM portal? What is the difference? Is it only the URL? Why, if I choose to create a VM on classic within the same region, can I not use my network created in the ARM portal, or vice versa?

All these questions are valid, but some of them stem from misconceptions about the way Azure works or has been set up. In an older post, I covered the differences in creating virtual machines between the ASM (classic) portal and the ARM portal (see here); however, there is a lot more than virtual machines in Azure.

Although most of the general networking principles outlined in the previous post (see here) apply to Azure virtual networks regardless of their deployment model, there are some differences between Azure Resource Manager and classic virtual networks.

In the classic deployment model, network characteristics of virtual machines are determined by:

  • A mandatory cloud service that serves as a logical container for virtual machines.
  • An optional virtual network that allows you to implement direct connectivity among virtual machines in different cloud services and to on-premises networks. In particular, cloud services support virtual network placement but do not enforce it. As a result, you have the option of deploying a cloud service without creating a new virtual network or without using an existing one.

In the Azure Resource Manager model, network characteristics of virtual machines include that:

  • There is no support for cloud services. To deliver the equivalent functionality, the Azure Resource Manager model provides a number of additional resource types. In particular, to implement load balancing and NAT, you can implement an Azure load balancer. To allow connectivity to a virtual machine, you must create a virtual network adapter and attach it to the virtual machine. While this increases to some extent the complexity of provisioning resources, it offers significant performance and flexibility benefits. In particular, you can deploy your solutions much faster than when using the classic deployment model.
  • A virtual machine must reside within a virtual network. A virtual machine attaches to a virtual network by using one or more virtual network interface cards.

Note: A load balancer constitutes a separate Azure Resource Manager resource, while in the classic deployment model it is part of the cloud service in which load-balanced virtual machines reside. Similarly, a network interface is an inherent part of a classic virtual machine, but Azure Resource Manager allows you to manage it separately, including detaching it from one virtual machine and attaching it to another. The same logic applies to a public IP address. In particular, every cloud service has at least one automatically assigned public IP address. However, public IP address assignment is optional with Azure Resource Manager. Because the Azure Resource Manager deployment model does not support cloud services, you instead have the choice of associating a public IP address with either an Azure load balancer or a network adapter.
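The note above can be illustrated with the Azure CLI: in the Resource Manager model, the public IP address and network interface are created as standalone resources and only then attached to a virtual machine. All resource names below are placeholders for this sketch:

```shell
# Standalone networking resources in the Resource Manager model:
az network vnet create --resource-group demo-rg --name demo-vnet \
  --subnet-name default
az network public-ip create --resource-group demo-rg --name demo-pip
az network nic create --resource-group demo-rg --name demo-nic \
  --vnet-name demo-vnet --subnet default \
  --public-ip-address demo-pip

# The virtual machine attaches to the pre-created network interface;
# the NIC and public IP outlive the VM and can be reattached elsewhere.
az vm create --resource-group demo-rg --name demo-vm \
  --image UbuntuLTS --nics demo-nic \
  --admin-username azureuser --generate-ssh-keys
```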

The following list summarizes the primary differences between the classic deployment model (Azure Service Management) and the Azure Resource Manager model from the networking standpoint.

  • Azure Cloud Services for virtual machines. Classic: the cloud service is a mandatory container for virtual machines and associated objects. Resource Manager: the cloud service does not exist.
  • Load balancing. Classic: the cloud service functions as a load balancer for infrastructure as a service (IaaS) resources within Azure Cloud Services. Resource Manager: the load balancer is an independent resource; you can associate a network adapter that is attached to a virtual machine with a load balancer.
  • Virtual IP address (VIP). Classic: the platform automatically assigns a VIP to a cloud service upon its creation; you use this IP address to allow connectivity to virtual machines within the cloud service from the Internet or from Azure-resident services. Resource Manager: you have the option of assigning a public IP address to a network adapter or a load balancer.
  • Reserved IP address. Classic: you can reserve an IP address in Azure and then associate it with a cloud service to ensure that its VIP remains constant. Resource Manager: static public IP addresses provide the same capability as reserved IP addresses.
  • Public IP address per virtual machine. Classic: you can assign public IP addresses to a virtual machine directly. Resource Manager: you can assign public IP addresses to a network interface attached to a virtual machine.
  • Endpoints. Classic: you allow external connections to virtual machines by configuring endpoints of the cloud service. Resource Manager: you can access a virtual machine by using its public IP address; alternatively, you can provide access to a virtual machine on a specific port by configuring inbound NAT rules on a load balancer associated with the network adapter attached to the virtual machine.
  • DNS name. Classic: every cloud service has a public DNS name in the cloudapp.net namespace, such as mdnogadev.cloudapp.net. Resource Manager: the DNS name associated with a public IP address of a virtual machine or a load balancer is optional; the FQDN includes the Azure region where the load balancer and the virtual machine reside, such as mdnogavm1.westus.cloudapp.azure.com.
  • Network interfaces. Classic: you define the primary and secondary network interfaces within the configuration of a virtual machine. Resource Manager: the network interface is an independent resource that persists in the Azure environment; you can attach it to, and detach it from, virtual machines without losing its identity and configuration state, and its lifecycle does not have to depend on the lifecycle of a virtual machine.

 

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

The difference between Azure Virtual Machines and Azure Cloud Services?

Azure offers several compute hosting options for integrating on-premises workloads with Azure. In a previous post (see here), I described the difference between the two deployment models, Azure Resource Manager and Azure Service Manager (classic). In this post, I will focus on the difference between Azure virtual machines and Azure Cloud Services, because they serve as the basis for integration solutions.

Azure virtual machines

Azure virtual machines provide the greatest degree of control over the virtual machine operating system. You can configure an Azure virtual machine arbitrarily and install almost any third-party software, as long as you do not violate the platform's restrictions. Every virtual machine has at least one operating system disk and supports up to 64 data disks, which persist content across restarts.

You can provision Azure virtual machines by using either the classic or the Azure Resource Manager deployment model. When using the Azure Resource Manager deployment model, you must deploy Azure virtual machines into an Azure virtual network.

Because you have complete control over the virtual machine at the operating system level, you are responsible for maintaining the operating system. The responsibilities include installing software updates from the operating system vendor, performing backups, and implementing resiliency to provide a sufficient level of business continuity.

When using the classic deployment model to horizontally scale Azure virtual machines, you must pre-provision additional Azure virtual machines and keep them offline until you are ready to scale out. With the Azure Resource Manager deployment model, you have the option of using virtual machine scale sets for horizontal scaling.
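As a sketch of the scale-set approach (the resource names are placeholders), the Azure CLI can create a scale set and then change its capacity on demand:

```shell
# Create a scale set with two instances behind a load balancer
az vmss create --resource-group demo-rg --name demo-vmss \
  --image UbuntuLTS --instance-count 2 \
  --admin-username azureuser --generate-ssh-keys

# Scale out to five instances without pre-provisioning offline VMs
az vmss scale --resource-group demo-rg --name demo-vmss --new-capacity 5
```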

Azure virtual machines are best suited for hosting:

  • Windows Server or Linux infrastructure servers, such as Active Directory domain controllers or Domain Name System (DNS) servers.
  • Highly customized app servers for which the setup involves a complex configuration.
  • Stateful workloads that require persistent storage, such as database servers.

Azure Cloud Services

Azure Cloud Services allows you to manage the virtual machine operating system. However, because Azure Cloud Services uses temporary storage, any change you apply directly does not persist across restarts. The virtual disks are provisioned automatically whenever you start the service, based on the custom code and configuration files you provide. Moreover, you are not responsible for maintaining operating system updates. Business continuity is part of the service, with the code and configuration automatically replicated across multiple locations.

Azure Cloud Services supports only the classic deployment model. You can deploy cloud service instances into a virtual network, but you must provision such a network by using the classic deployment model.

Azure Cloud Services offers superior horizontal scaling capabilities when compared with Azure virtual machines. It can scale to thousands of instances, which the Azure platform automatically provisions based on criteria you define. In addition, it simplifies the development of solutions that consist of multiple tiers. In a typical implementation, a cloud service contains a web role and a worker role. The web role contains virtual machines that provide front-end functionality. The worker role manages the processing of background tasks. Both roles can scale independently of each other.

Azure Cloud Services is best suited for hosting:

  • Multitiered web apps.
  • Stateless apps that require a highly scalable, high-performance environment.

 

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga