How to migrate an Azure Managed Disk to a different subscription and region – Part II

In the previous post, I described what you need to do to move Azure Managed Disks between subscriptions and then between regions (see link HERE). However, I didn’t explore the steps required to move between regions.

Here are the step-by-step instructions for each stage mentioned in the previous post, showing how to move an Azure Managed Disk between Azure regions:

1. Stop the virtual machine to be migrated

1. Navigate to the Azure portal (https://portal.azure.com)
2. Click Virtual Machines on the left-hand menu, then select the virtual machine to be migrated
3. Click Stop

2. Use the Export function to generate a URL pointing to the VHD of the managed disk to migrate

1. On the selected virtual machine, click on Disks

2. Select the managed disk that you want to migrate

3. Click Export

4. Click on the Generate URL button

5. Copy this unique URL, which points to the source VHD file, to Notepad or another text editor.

3. Create a storage account in the target region of the new subscription

1. First, we need to create a storage account in the target region of the target subscription

2. Create a container (folder) to hold the VHD file, and retrieve the connection information for that storage account:

2.1. On the Azure portal, select All services.
2.2. Scroll to Storage, and then select Storage accounts.
2.3. On the Storage accounts window, click Add.
2.4. Select the subscription in which you want to create the storage account, as well as the target resource group.
2.5. Enter a name for your storage account.
2.6. Select the location of your storage account (the target region).
2.7. Leave all other fields at their default values.
2.8. Click Review + create to review your storage account settings and create the account.

3. After the storage account is created, click Blobs, and then click Container to add a new container

4. On the Name field, type VHDs

5. Click the OK button to confirm. This folder will contain the VHD file copied to the new region.

6. Click VHDs and then select Properties

7. Copy the URL to Notepad (for example, https://storageaccountname.blob.core.windows.net/vhds)

8. Click Access keys and copy the key1 value to Notepad.
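If you prefer scripting over portal clicks, the storage account and container from this stage can also be created with the cross-platform Azure CLI. The sketch below is illustrative only: the names (targetmigrationsa, target-rg, westeurope) are hypothetical placeholders, and the az commands are commented out because they require an Azure login; the runnable part just checks the storage account naming rules (3–24 characters, lowercase letters and digits only).

```shell
# Hypothetical names -- substitute your own.
SA="targetmigrationsa"   # storage account name (must be globally unique)
RG="target-rg"           # target resource group
LOCATION="westeurope"    # target region

# Local sanity check of the storage account naming rules:
if [ "${#SA}" -ge 3 ] && [ "${#SA}" -le 24 ] \
   && [ "$SA" = "$(printf '%s' "$SA" | tr -cd 'a-z0-9')" ]; then
  echo "valid storage account name"
else
  echo "invalid storage account name"
fi

# The actual provisioning (requires the Azure CLI and a logged-in session):
# az storage account create --name "$SA" --resource-group "$RG" \
#   --location "$LOCATION" --sku Standard_LRS
# az storage container create --account-name "$SA" --name vhds
# az storage account keys list --account-name "$SA" --resource-group "$RG"
```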

4. Use the AzCopy command from an Azure virtual machine (this option will decrease the copying time) to copy the data (VHD file) from the managed disk to a storage account created in the target region of the new subscription

AzCopy is a command-line utility designed to copy data to and from Azure Blob, File, and Table storage using simple commands with optimal performance. You can copy data between a file system and a storage account, or between storage accounts.

To start, you need to download and then install the latest version of the AzCopy utility on a Windows virtual machine hosted in Azure. This was the way I found to speed up the process of transferring the data between managed disks.

So, let’s start with the steps to copy the data:

1. Open a command prompt on the Azure virtual machine

2. Change to the C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy directory (the installer does not add the utility to the PATH).

3. Type the command to copy the data from the source managed disk to the target storage account previously created in the target subscription:
        azcopy /Source:"<URL_FROM_VHD>" /Dest:"<URL_VHD_DESTINATION>" /DestKey:"<KEY1>"
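To make the placeholders concrete: the /Dest value is simply the blob URL inside the container created in stage 3. Here is a hedged sketch with hypothetical names; the AzCopy call itself is commented out, since it needs the real export URL and key copied in the earlier stages.

```shell
# Hypothetical values -- substitute the ones you copied to Notepad.
SA="targetmigrationsa"     # target storage account
CONTAINER="vhds"           # container created in stage 3
BLOB="migrated-disk.vhd"   # name the copied VHD will receive

# AzCopy's /Dest is the full blob URL inside the target container:
DEST_URL="https://${SA}.blob.core.windows.net/${CONTAINER}/${BLOB}"
echo "$DEST_URL"

# The copy itself, run on the Azure VM where AzCopy is installed:
# azcopy /Source:"<URL_FROM_VHD>" /Dest:"$DEST_URL" /DestKey:"<KEY1>"
```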

5. From the VHD in the storage account, create a managed disk

After the copy is complete, we need to create a managed disk from the copied VHD.

1. Click All services

2. Select Disks, and then click Add

3. Enter the name of the managed disk to be created from the VHD file copied to the storage account

4. Select the target subscription, the target resource group, and, especially, the target region

5. From the Source type drop-down menu, select Storage blob, and then click the Browse button

6. Select the target storage account

7. Select the VHDs container

8. Select the copied VHD file, and then confirm by clicking the Select button

9. Finally, enter the size in GiB and then click the Create button.

6. From the previously created managed disk, recreate the virtual machine

Now that the managed disk is in a different region and subscription, the final step is to create a virtual machine from the managed disk.

1. On the Azure Portal, navigate to Disks

2. Click on the managed disk that was created

3. Click Create VM

4. Follow the steps to create the virtual machine.

With these six steps, we copied a managed disk of an Azure virtual machine from region A to region B in another subscription, and then recreated the virtual machine.

Finally, be aware that it is possible to enable migration of managed disks between two subscriptions from the Azure portal GUI, but this method does not allow you to change the region of the virtual machine (and thus of the managed disk), and it is only valid if the source virtual machine is not backed up.

To enable this feature, run the following two commands from an Azure PowerShell window against the source subscription:

Register-AzureRmProviderFeature -FeatureName ManagedResourcesMove -ProviderNamespace Microsoft.Compute

Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Compute
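If you work with the cross-platform Azure CLI instead of the AzureRm PowerShell module, the equivalent registration can be sketched as follows. This is an assumption-laden sketch: it presumes the az CLI is installed and logged in to the source subscription, so the commands are echoed rather than executed here.

```shell
NAMESPACE="Microsoft.Compute"
FEATURE="ManagedResourcesMove"

# Echoed so the sketch can be inspected without an Azure login;
# drop the echo to actually run the registration:
echo "az feature register --namespace $NAMESPACE --name $FEATURE"
echo "az provider register --namespace $NAMESPACE"
```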

Cheers,
Marcos Nogueira
Azure MVP
Twitter: @mdnoga

Containers on Azure – Part 2

In the previous post (see here), I talked about the concepts of containers, Azure Container Service, and Azure Service Fabric. Now that you know the concepts and have an idea of how to implement them, let’s see how you can deploy containers in Azure.

Azure offers several ways to provision Azure virtual machines that support Docker containers:

  • Install the Docker virtual machine extension. You can either add the extension to an existing Azure virtual machine running a Docker-supported distribution of Linux or include it when deploying a new Azure virtual machine via a Resource Manager template or a command-line script. The extension installs the Docker daemon, also called the Docker Engine; the Docker client; and Docker Compose. The Docker daemon is necessary for an Azure virtual machine to function as a Docker host.
    Note: Docker Engine is a lightweight software component that runs as a daemon on a Linux operating system. It provides the environment for running containerized apps.
    The Docker client is management software that allows you to interact with the Docker Engine via a command line, which allows you to create, run, and transfer containers.
    Docker Compose is a utility for building and running Docker apps that consist of multiple containers.
  • Provision a Docker Azure virtual machine available from the Azure Marketplace. Use the Azure portal to deploy a Linux virtual machine to run Docker containerized workloads. During the virtual machine deployment, Azure automatically provisions the Docker virtual machine extension, which installs all the necessary Docker components.
  • Deploy an ACS cluster. This allows you to provision and manage multiple instances of Docker containers residing on clustered Docker hosts.
  • Use the Docker Machine driver to deploy an Azure virtual machine with support for Docker containers. Docker Machine is a command-line utility that allows you to perform several Docker-related administrative tasks, including provisioning new Docker hosts. The utility includes support for deploying Docker hosts on-premises and on Azure virtual machines. You must include the --driver azure (or -d azure) parameter when running the docker-machine create command. For example, the following command deploys a new Azure virtual machine named dockerazurevm1 in an Azure subscription, specifying the subscription ID, creating an administrative user account named mrdocker, and enabling connectivity on TCP port 80.

docker-machine create -d azure \
  --azure-ssh-user mrdocker \
  --azure-subscription-id your_Azure_subscription_ID \
  --azure-open-port 80 \
  dockerazurevm1

With the default settings, the virtual machine has the size Standard_A2 and resides on an Azure virtual network named docker-machine, in a docker-machine resource group, in the West US region. A default network security group associated with the virtual machine network interface allows inbound connectivity on TCP port 22 for Secure Shell connections and on TCP port 2376 for remote connections from the Docker client. The command also generates self-signed certificates that help secure subsequent communication from the computer where you ran Docker Machine, and stores the corresponding private key in your user account profile.

For the full syntax of the docker-machine create -d azure command, refer to Microsoft Azure.

Docker Machine is available on Windows, Linux, and Mac OS X operating systems. For installation instructions and links to download locations, refer to Install Docker Machine.

Containers on an Azure Virtual Machine

Choosing the most convenient and efficient way to run containers in your environment depends on the location of your Docker hosts. Docker Machine allows you to manage on-premises and Azure-based Docker hosts in a consistent manner.

During Azure virtual machine provisioning, Docker Machine generates the self-signed certificate that you can use to establish a Secure Shell session to the Docker host. It also stores the certificate’s private key in your user account profile. This allows you to continue managing the Azure virtual machine from the same computer on which you initiated the virtual machine provisioning. To simplify management, you should also configure environment variables within your Windows command shell. To identify the environment variables to configure, run the following at the command prompt, where dockerazurevm1 is the name of the Azure virtual machine you deployed by running the docker-machine create command.

docker-machine env dockerazurevm1

This should return output similar to the following.

SET DOCKER_CERT_PATH="C:\Users\Admin\.docker\dockerazurevm1\certs"
SET DOCKER_MACHINE_NAME="dockerazurevm1"
@FOR /f "tokens=*" %i IN ('docker-machine env dockerazurevm1') DO @%i

You can now start a container on the Azure virtual machine by running the following command.

docker run -d -p 80:80 --restart=always container_name

This automatically locates the container image named container_name, publishes it on port 80, starts it in detached mode, and ensures that the container always restarts after it terminates, regardless of the exit status. In detached mode, the console session is not attached to the container process, so you can continue using it to manage the Docker host. In attached mode, the console session displays the standard input, output, and error streams of the Docker container.
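As a quick illustration of the -p flag’s host_port:container_port format, here is a minimal shell sketch. The docker run line is commented out because it requires a configured Docker host, and nginx is just an example image name.

```shell
# Illustrative only: run the official nginx image detached,
# mapping host port 8080 to container port 80:
# docker run -d -p 8080:80 --restart=always nginx

# Splitting the -p mapping with plain shell parameter expansion:
PORT_MAP="8080:80"
HOST_PORT="${PORT_MAP%%:*}"        # left of the colon: the host port
CONTAINER_PORT="${PORT_MAP##*:}"   # right of the colon: the container port
echo "host ${HOST_PORT} -> container ${CONTAINER_PORT}"
```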

For the full syntax of the docker run command, refer to Docker Run.

The docker run command attempts to locate the latest version of the container locally on the Docker host. By default, it checks the version against the Docker Hub. This is a central, Docker-managed repository of Docker images available publicly via the Internet. If there is no locally cached container image or its version is out-of-date, the Docker daemon automatically downloads the latest version from the Docker Hub.

You can set up a private Docker Registry to maintain your own collection of container images. A private Docker Registry runs as a container based on the registry image available from the Docker Hub. You can store your private images in an Azure storage account.

To set up a private Docker Registry, follow this procedure:

  1. Create an Azure storage account.
  2. Start a registry container on a Docker host by running the following command.
    docker run -d -p 5000:5000 \
      -e REGISTRY_STORAGE=azure \
      -e REGISTRY_STORAGE_AZURE_ACCOUNTNAME="storage_account_name" \
      -e REGISTRY_STORAGE_AZURE_ACCOUNTKEY="storage_account_key" \
      -e REGISTRY_STORAGE_AZURE_CONTAINER="registry" \
      --name=registry registry:2

    In the preceding command, storage_account_name and storage_account_key represent the name and one of the two keys of the Azure storage account you created in the previous step. This provisions a new registry container and makes it accessible via TCP port 5000.
    Note: To allow inbound connections on the port that you specified when executing the docker run command, be sure to update the network security group associated with the Docker Azure VM network interface where the registry container is running.

  3. Build a new Docker image by running the docker build command, or pull an existing image from Docker Hub by running the docker pull command.
  4. Use the following docker tag command to associate the image you created or downloaded in the previous step with the private registry.
    docker tag hello-world localhost:5000/image_name

    In the preceding command, image_name represents the name of the image. This tags the image and designates a new repository in your private registry.
  5. To upload the newly tagged image to the private Docker Registry, run the following command.
    docker push localhost:5000/image_name

    This pushes the image into the private registry.

  6. To download the image from the private registry, run the following command.
    docker pull localhost:5000/image_name

    If you run the docker run command, the Docker daemon uses the newly downloaded image from your private registry.
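The tag/push/pull cycle above works because the image tag itself carries the registry address: anything tagged registry_host:port/name is routed to that registry. Here is a short sketch with the same hypothetical names used in the steps; the docker commands are commented out, as they need the registry container running.

```shell
REGISTRY="localhost:5000"   # address of the private registry container
IMAGE="hello-world"         # image pulled or built in step 3

# A tag of the form registry_host:port/name routes push/pull to that registry:
QUALIFIED="${REGISTRY}/${IMAGE}"
echo "$QUALIFIED"

# docker tag "$IMAGE" "$QUALIFIED"
# docker push "$QUALIFIED"
# docker pull "$QUALIFIED"
```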



Marcos Nogueira
Twitter: @mdnoga


Containers on Azure – Part 1

In the last decade, hardware virtualization has drastically changed the IT landscape. One of many consequences of this trend is the emergence of cloud computing. However, a more recent virtualization approach promises to bring even more significant changes to the way you develop, deploy, and manage compute workloads. This approach is based on the concept of containers.

These series of posts explain containers and the ways you can implement them in Azure and in your on-premises datacenter.

When this concept was introduced to the Microsoft world, it was somewhat difficult for me to understand the whole concept and how I would use it. So, my purpose here is to ease the path and explain how containers work and the ways you can implement them in Azure and in your on-premises datacenter. The goal is to facilitate deploying clusters of containerized workloads by using Azure Container Service (ACS).

Azure Service Fabric offers an innovative way to design and provision applications by dividing them into small, independently operating components called microservices. Containers and microservices are complementary technologies. By combining them, you can further increase the density of your workloads, optimize resource usage, and minimize cost.

What are Containers?

In a very simplistic way, containers are the next stage in virtualizing computing resources. Hardware virtualization freed people to a large extent from the constraints imposed by physical hardware. It enabled running multiple isolated instances of operating systems concurrently on the same physical hardware. Container-based virtualization virtualizes the operating system, allowing you to run multiple applications within the same operating system instance while maintaining isolation. Containers within a virtual machine provide functionality similar to that of virtual machines on a physical server. To better understand this analogy, this topic compares virtual machines with containers.

The following list summarizes the high-level differences between virtual machines and containers:

  • Isolation mechanism. Virtual machines: built into the hypervisor. Containers: relies on operating system support.
  • Required amount of memory. Virtual machines: includes operating system and app requirements. Containers: includes containerized app requirements only.
  • Startup time. Virtual machines: includes operating system boot, start of services, apps, and app dependencies. Containers: includes only the start of apps and app dependencies; the operating system is already running.
  • Portability. Virtual machines: portable, but the image is larger because it includes the operating system. Containers: more portable, because the image includes only apps and their dependencies.
  • Image automation. Virtual machines: depends on the operating system and apps. Containers: based on the Docker registry (for Docker images).

To better understand the difference between virtual machines and containers, I highly suggest reading the article Virtual Machines and Containers in Azure.

Compared with virtual machines, containers offer several benefits, including:

  • Increased speed with which you can develop and share application code.
  • An improved testing lifecycle for applications.
  • An improved deployment process for applications.
  • The increased density of your workloads, resulting in improved resource utilization.

The most popular containerization technology is available from Docker. Docker uses the Linux operating system’s built-in support for containers. Windows Server 2016 includes a container feature that delivers equivalent functionality in the Windows Server operating system.

Azure Container Service

ACS allows you to administer clusters of multiple Docker hosts running containerized apps. ACS manages the provisioning of cloud infrastructure components, including Azure virtual machines and virtual machine scale sets, Azure storage, virtual networks, and load balancers. Additionally, it provides the management and scaling of containerized apps to tens of thousands of containers via integration with the following two orchestration engines:

  • The Mesosphere Datacenter Operating System (DC/OS). A distributed operating system from Mesosphere, based on the Apache Mesos project.
  • Docker Swarm. Clustering software provided by Docker.

Based on this integration, you can manage ACS clusters on the DC/OS or the Docker Swarm platform by relying on the same tools you use to manage your existing containerized workloads.

You can provision an ACS cluster directly from the Azure portal. Alternatively, you can use the Azure Resource Manager template or Azure command-line interface. During provisioning, you choose either DC/OS or Docker Swarm as the framework configuration. Subsequent configuration and management specifics depend mainly on this choice. Although both orchestration engines fully support Docker-formatted containers and Linux-based container isolation, they have architectural and functional differences, including:

  • DC/OS contains a Master availability set, public agent virtual machine scale set, and private agent virtual machine scale set, with fault-tolerant master/subordinate instances replicated by using Apache ZooKeeper. Docker Swarm contains a Master availability set and the agent virtual machine scale set.
  • DC/OS includes by default the Marathon orchestration platform, which manages the cluster-wide scheduling of containerized workloads. It supports multiple-resource scheduling that takes memory, CPU, disks, and ports into consideration.
  • With Docker Swarm, you can use the Docker command-line interface or the standard Docker application programming interface (API). DC/OS offers the REST API for interacting with its orchestration platform.

Azure Service Fabric

Azure Service Fabric is a cloud-based platform for developing, provisioning, and managing distributed, highly scalable, and highly available services and applications. Its capabilities result from dividing the functionality provided by these services and applications into individual components called microservices. Common examples of such microservices include the shopping carts or user profiles of commercial websites and the queues, gateways, and caches that provide infrastructure services. Multiple instances of these microservices run concurrently on a cluster of Azure virtual machines.

This approach might sound similar to building multitier applications by using Azure Cloud Services, which allows you to independently scale web and worker tiers. However, Azure Service Fabric operates on a much more granular level, as the term microservices suggests. This allows for much more efficient resource utilization while scaling to potentially thousands of virtual machines. Additionally, it allows developers to introduce gradual changes in the code of individual application components without having to upgrade the entire application.

Another feature that distinguishes Azure Service Fabric from traditional Platform as a Service (PaaS) services is support for both stateless and stateful components. Azure Cloud Services are stateless by design. To save state information, they have to rely on other Azure services, such as Azure Storage or Azure SQL Database. Azure Service Fabric, on the other hand, offers built-in support for maintaining state information. This minimizes or even eliminates the need for a back-end storage tier. It also decreases the latency when accessing application data.


Marcos Nogueira
Twitter: @mdnoga

Azure Cloud Shell Everywhere

WOW! Microsoft did it again! If you didn’t see the announcements at the Microsoft Build 2017 conference, you should!

They announced a lot of new things, but regarding Azure, one of my favorites is the iOS application!

I see this app as a savior in so many ways! For those times when you have a problem and need immediate access to run a PowerShell script, or to perform an action like restarting a virtual machine, now you have the right tools within reach! Being able to access my Azure subscription through my mobile phone is amazing!

From what I was able to see and play with, it’s basically your Azure portal in an iOS app! I’ve been setting up and playing with Azure through this app, and it’s really good.

Here are some screenshots from the Azure iOS app.


As in the Azure Portal, you can select what you want to see by choosing your favorites. This is a shortcut view of what is most interesting to you.

When you select a resource, the information and the tasks you can perform are very similar to what is available in the Azure Portal.

Even the notifications that you have in the Azure Portal are available here, so you can track what is going on in your Azure tenant.

When you select a virtual machine, you can see a lot of information. Usually, smartphone apps offer a subset of the capabilities of the main tool (in this case, the Azure Portal), but with this app you can do more than 80% of your daily basic tasks.

And as you might expect, Azure Cloud Shell will be available in the Azure app as well, although it’s not available yet (at least for me; see picture below).

But this is probably the main reason why I will use this app a lot. I store a lot of my scripts in OneNote, and being able to copy and paste them into this PowerShell window to set up an S2S VPN, for example, while waiting for your food to arrive, is something that every “Azure geek”/“Azure-centric nerd” dreams of, right? The possibilities are unlimited!




Marcos Nogueira
Twitter: @mdnoga


Differences between the classic and the Azure Resource Manager deployment model

A lot of times I get these questions: Where should I create my resources? On the classic portal or on the ARM portal? What is the difference? Is it only the URL? Why, if I choose to create a VM on classic within the same region, can I not use a network created on the ARM portal, or vice versa?

All these questions are valid, but some of them stem from misconceptions about the way Azure works or has been set up. In an older post, I covered the differences in creating virtual machines between the ASM (classic) portal and the ARM portal (see here); however, there is a lot more than virtual machines in Azure.

Although most of the general networking principles outlined in the previous post (see here) apply to Azure virtual networks regardless of their deployment model, there are some differences between Azure Resource Manager and classic virtual networks.

In the classic deployment model, network characteristics of virtual machines are determined by:

  • A mandatory cloud service that serves as a logical container for virtual machines.
  • An optional virtual network that allows you to implement direct connectivity among virtual machines in different cloud services and to on-premises networks. In particular, cloud services support virtual network placement but do not enforce it. As a result, you have the option of deploying a cloud service without creating a new virtual network or without using an existing one.

In the Azure Resource Manager model, network characteristics of virtual machines include that:

  • There is no support for cloud services. To deliver the equivalent functionality, the Azure Resource Manager model provides a number of additional resource types. In particular, to implement load balancing and NAT, you can implement an Azure load balancer. To allow connectivity to a virtual machine, you must create a virtual network interface and attach it to the virtual machine. While this increases to some extent the complexity of provisioning resources, it offers significant performance and flexibility benefits. In particular, you can deploy your solutions much faster than when using the classic deployment model.
  • A virtual machine must reside within a virtual network. A virtual machine attaches to a virtual network by using one or more virtual network interface cards.

Note: A load balancer constitutes a separate Azure Resource Manager resource, while in the classic deployment model it is part of the cloud service in which load-balanced virtual machines reside. Similarly, a network interface is an inherent part of a classic virtual machine, but Azure Resource Manager allows you to manage it separately, including detaching it from one virtual machine and attaching it to another. The same logic applies to a public IP address. Every cloud service has at least one automatically assigned public IP address, but public IP address assignment is optional with Azure Resource Manager. Because the Azure Resource Manager deployment model does not support cloud services, you instead have the choice of associating a public IP address with either an Azure load balancer or a network adapter.

The following list summarizes the primary differences between the classic deployment model (Azure Service Management, or ASM) and the Azure Resource Manager (ARM) model from the networking standpoint.

  • Azure Cloud Services for virtual machines. ASM: the cloud service is a mandatory container for virtual machines and associated objects. ARM: the cloud service does not exist.
  • Load balancing. ASM: the cloud service functions as a load balancer for infrastructure as a service (IaaS) resources within Azure Cloud Services. ARM: the load balancer is an independent resource; you can associate a network adapter that is attached to a virtual machine with a load balancer.
  • Virtual IP address (VIP). ASM: the platform automatically assigns a VIP to a cloud service upon its creation. You use this IP address to allow connectivity to virtual machines within the cloud service from the Internet or from Azure-resident services. ARM: you have the option of assigning a public IP address to a network adapter or a load balancer.
  • Reserved IP address. ASM: you can reserve an IP address in Azure and then associate it with a cloud service to ensure that its VIP remains constant. ARM: static public IP addresses provide the same capability as reserved IP addresses.
  • Public IP address per virtual machine. ASM: you can assign public IP addresses to a virtual machine directly. ARM: you can assign public IP addresses to a network interface attached to a virtual machine.
  • Endpoints. ASM: you can allow external connections to virtual machines by configuring endpoints of the cloud service. ARM: you can access a virtual machine by using its public IP address; alternatively, you can provide access to a virtual machine on a specific port by configuring inbound NAT rules on a load balancer associated with the network adapter attached to the virtual machine.
  • DNS name. ASM: every cloud service has a public DNS name in the cloudapp.net namespace, such as cloudservicename.cloudapp.net. ARM: the DNS name associated with a public IP address of a virtual machine or a load balancer is optional; the FQDN includes the Azure region where the load balancer and the virtual machine reside, such as name.westus.cloudapp.azure.com.
  • Network interfaces. ASM: you define the primary and secondary network interfaces within the configuration of a virtual machine. ARM: the network interface is an independent resource that is persistent in the Azure environment; you can attach it to, and detach it from, virtual machines without losing its identity and configuration state, and its lifecycle does not have to depend on the lifecycle of a virtual machine.



Marcos Nogueira
Twitter: @mdnoga