Move VM between VNETs in Azure

This week I came across this scenario: I needed to move a virtual machine in Azure between different VNETs. You might have different reasons to do it, but what is the best way to do it?

First you have to understand the scenario: is it between VNETs in the same region, or between regions? The same subscription or different subscriptions? And finally, the same tenant or different tenants?

The way I look at this is simple. I know there are different ways to approach these scenarios, but I want to try to create a solution that you can use no matter which one applies.

Let’s work on the possibilities. What we know:

  • When you create a VM in Azure, you create several resources (compute, network and storage)
  • When you delete the VM in Azure, you only delete the compute (assuming that you clicked the delete button and didn’t delete the resource group). That means the VHD and the network adapter (and all their dependencies) will remain intact.

So we could use these “orphaned” resources (objects) to create a new VM on the VNET that we want. Genius! 😊

In this case we could use the script that I published to create the VM from the existing disk (see here). That is one option.
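If you don’t want to dig up that script, the general shape of the approach with the AzureRM PowerShell module of the time looks roughly like this (a sketch only — all resource group, VM, disk and VNET names below are placeholders, not from the original post):

```powershell
# Sketch only - names, sizes and resource groups are placeholders.
# 1. Grab the managed OS disk left behind by the deleted VM
$disk = Get-AzureRmDisk -ResourceGroupName "myRG" -DiskName "myVM_OsDisk"

# 2. Create a new NIC in the target VNET/subnet
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "myRG" -Name "targetVNET"
$nic  = New-AzureRmNetworkInterface -ResourceGroupName "myRG" -Name "myVM-nic2" `
          -Location $disk.Location -SubnetId $vnet.Subnets[0].Id

# 3. Build the VM config around the existing disk and attach it
$vmConfig = New-AzureRmVMConfig -VMName "myVM" -VMSize "Standard_DS2_v2"
$vmConfig = Set-AzureRmVMOSDisk -VM $vmConfig -ManagedDiskId $disk.Id `
              -CreateOption Attach -Windows
$vmConfig = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id

# 4. Create the VM (compute only - the disk and NIC already exist)
New-AzureRmVM -ResourceGroupName "myRG" -Location $disk.Location -VM $vmConfig
```

Because only the compute object is recreated, the existing OS disk carries over untouched while the new NIC places the VM on the target VNET.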

Alternatively, if you are on the path of deploying with ARM templates in JSON, you might want to double-check that your JSON template reflects this as well (see here).

This is another way to solve your issue of moving a VM between VNETS.

Cheers,

Marcos Nogueira
Azure MVP

azurecentric.com
Twitter: @mdnoga

Deploy an ARM VM using an existing VHD in Azure

The other day, one of my customers wanted to rebuild a virtual machine from an existing VHD and place it in a new Resource Group and on a different VNET, but without transferring the VHD. The idea was to leave the VHD parked in its storage account to avoid transferring this huge file.

First, I want to clarify that if you delete the VM, you are not deleting all the resources; the VHD(s), network adapter(s) and the network IPs will remain intact. You are only deleting the compute section of the VM. That means you can redeploy using the same configuration, or change the network, for example.

To achieve that, though, you need to do it through PowerShell and/or JSON files.

So, if you change the original JSON file by just replacing the VHD, you will probably get an error message saying, “Cannot attach an existing OS disk if the VM is created from a platform or user image.”

To avoid that, you have to change the JSON file so that createOption uses the Attach method instead.

Here is what you need to change:

Original JSON:

"storageProfile": {
    "imageReference": {
        "publisher": "MicrosoftWindowsServer",
        "offer": "WindowsServer",
        "sku": "[parameters('windowsOSVersion')]",
        "version": "latest"
    },
    "osDisk": {
        "createOption": "FromImage"
    }
}

Replace with:

"storageProfile": {
    "osDisk": {
        "createOption": "Attach",
        "managedDisk": {
            "id": "[Managed_Disk_ID]"
        }
    }
}
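One caveat worth flagging: in my experience, when attaching an existing OS disk the template also expects the disk’s OS type, and the osProfile section (computer name, admin credentials) must be removed, since that information already lives inside the existing disk. A fuller osDisk fragment might look like this (the "osType" value here is an assumption for a Windows disk):

```json
"storageProfile": {
    "osDisk": {
        "osType": "Windows",
        "createOption": "Attach",
        "managedDisk": {
            "id": "[Managed_Disk_ID]"
        }
    }
}
```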


Cheers,

Marcos Nogueira
Azure MVP
azurecentric.com
Twitter: @mdnoga

Managed disk on Azure

You have probably already seen this when creating a Virtual Machine on Azure. After you enter the basic information, like the name of the VM, and choose the size, it comes time to define and configure the settings of that VM. One of the first options is Use managed disks.

But what are managed disks? How do they work? What are the implications of using managed disks?

So, first thing: managed disks allow you to abstract away the storage accounts that your virtual machine disks live in. When you select that you want to use managed disks, you don’t have to set up or choose the storage account where those disks will be stored.

When you don’t want to use managed disks, you have to select the storage account yourself.

With managed disks, you only have to specify the size of the disk, and Azure manages it for you. You don’t have to worry about storage account limits, and you gain higher scalability, meaning that you can create up to 10,000 disks per region per subscription.
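As a sketch of how little you have to specify, here is what creating an empty managed data disk looked like with the AzureRM cmdlets current at the time (location, size and names below are placeholders of my own — note that no storage account appears anywhere):

```powershell
# Sketch - placeholder names. No storage account is created or referenced.
$diskConfig = New-AzureRmDiskConfig -Location "EastUS" -DiskSizeGB 128 `
                -AccountType PremiumLRS -CreateOption Empty
New-AzureRmDisk -ResourceGroupName "myRG" -DiskName "data-disk-01" -Disk $diskConfig
```

Compare that with the unmanaged path, where you would first have to create (or pick) a storage account and compose the VHD URI yourself.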

Managed disks also increase resilience for your availability sets, by making sure that each disk belongs to a storage unit in a different fault domain. In my experience, when you create storage accounts yourself, there is no guarantee that they will end up in different fault domains. In that scenario, even if you use availability sets during setup, you don’t avoid a single point of failure.

But if you are thinking that you would prefer storage accounts in order to control access to the VHDs, note that with managed disks you can use RBAC as well, to assign the permissions for a managed disk to one or more users. In this scenario, you manage access disk by disk, rather than for an entire storage account. That means more granular access control. You can, for example, prevent a user from copying the VHD while still letting them use the virtual machine.

The integration with Azure Backup is great. You can use the Azure Backup service with managed disks to create a backup job that will ease your VM restoration. Note, though, that managed disks only support Locally Redundant Storage (LRS) as a replication option, which means three copies of the VHD within the region.

To summarize, here are the benefits of managed disks:

  • Simple and scalable VM deployment
  • Better reliability for Availability Sets
  • Granular Access control
  • Azure Backup service support

Cheers,

Marcos Nogueira
Azure MVP

azurecentric.com
Twitter: @mdnoga

Azure Site Recovery Planning Considerations – Part 1

I have been writing quite a bit about how Azure Site Recovery can help you build a Disaster Recovery solution, migrate workloads into Azure, migrate from VMware to Hyper-V, and so many other ways to use ASR. However, I never covered what you need to do and know when you plan/design the ASR deployment/infrastructure to achieve your business requirements.

The first factor to consider when planning for ASR is whether the disaster recovery site will reside in an on-premises location or in Azure. In addition, you must also take into account the characteristics of your primary site, including:

  • The location. You should ensure that the secondary site is far enough from the primary site so that it will remain operational if there is a region-wide disaster affecting the availability of the primary site. On the other hand, the secondary site should be relatively close to the primary site to minimize the latency of replication traffic and connectivity from the primary site.
  • The existing virtualization platform. The architecture of the solution and its capabilities depend to some extent on whether you are using Hyper-V or vSphere and whether you rely on VMM or vCenter to manage virtualization hosts.
  • The virtual machines and workloads you intend to protect. Your secondary site should provide a sufficient amount of compute and storage resources to accommodate production workloads following the failover.

Capacity planning

As mentioned in the How to build a Disaster Recovery solution with Azure Site Recovery post, Microsoft offers the Site Recovery Capacity Planner, which is a Microsoft Excel macro-enabled workbook (see here). It assists with estimating capacity requirements for the deployment of a disaster recovery site in Azure. The Site Recovery Capacity Planner also helps with analyzing the existing workloads that you intend to protect and provides recommendations regarding the compute, storage, and network resources that you will require to implement their protection.

Modes

The workbook operates in two modes:

  • Quick Planner. This mode requires you to provide general statistics representing the current capacity and utilization of your production site. These statistics could include the total number of virtual machines, average number of disks per virtual machine, average size of a virtual machine disk, average disk utilization, total amount of data to be replicated, and average daily data change rate.
  • Detailed Planner. This mode requires you to provide capacity and utilization data for each virtual machine you intend to protect. This data could include the number of processors, memory allocation, number of network adapters, number of disks, total storage, disk utilization, and the operating system that is running in the virtual machine.
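The Quick Planner mode is essentially multiplying those averages together. A rough sketch of that arithmetic in Python (illustrative only — the real workbook works from the statistics you collect, not these made-up numbers):

```python
# Rough sketch of the Quick Planner arithmetic (illustrative only --
# the real workbook uses your collected statistics, not these numbers).
def estimate_replication(total_vms, avg_disks_per_vm, avg_disk_size_gb,
                         avg_utilization, daily_change_rate):
    # Total protected data across all VMs
    total_data_gb = total_vms * avg_disks_per_vm * avg_disk_size_gb * avg_utilization
    # Data churned per day that must be replicated
    daily_churn_gb = total_data_gb * daily_change_rate
    # Average bandwidth (megabits/s) to replicate one day's churn in 24 hours
    bandwidth_mbps = daily_churn_gb * 1024 * 8 / 86400
    return total_data_gb, daily_churn_gb, bandwidth_mbps

# Example: 100 VMs, 2 disks each, 500 GB per disk, 60% utilized, 5% daily churn
total, churn, mbps = estimate_replication(100, 2, 500, 0.6, 0.05)
```

Even this toy version makes the point of the mode: a handful of site-wide averages is enough for a first-pass estimate of storage and replication bandwidth.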

Note that you are responsible for collecting relevant data. The workbook simply handles the relevant calculations afterward. If you are using Hyper-V to host virtual machines, you can use the Microsoft Assessment and Planning (MAP) Toolkit for Hyper-V to determine the average daily data change rate. If you operate in a VMware environment, use the vSphere Replication Capacity Planning appliance instead.

ASR Planning Considerations

These are some of the most common considerations that I usually look at when I’m planning/designing an ASR implementation. Of course, there are a lot of other aspects that you need to consider, although I have found these are always present.

Azure virtual machine-related requirements

You must ensure that your on-premises virtual machines comply with a majority of the Azure virtual machine-specific requirements. These requirements include:

  • The operating system running within each protected virtual machine must be supported by Azure.
  • The virtual machine operating system and data disk size cannot exceed 1,023 gigabytes (GBs).
  • The virtual machine data disk count cannot exceed 64.
  • The virtual machine disks cannot be Internet Small Computer System Interface (iSCSI), Fibre Channel (FC), or shared virtual hard disks.

At the present time, Azure does not support the .vhdx disk type or the Generation 2 Hyper-V virtual machine type. Instead, Azure virtual machines must use the .vhd disk type and the Generation 1 Hyper-V virtual machine type. Fortunately, these limitations are not relevant when it comes to virtual machine protection. Site Recovery is capable of automatically converting the virtual disk type and the generation of Windows virtual machines when replicating virtual machine disks to Azure Storage.

Note: At the present time, ASR does not support Generation 2 virtual machines that are running Linux.

Network-related requirements

To facilitate different types of failover, you must consider the network requirements of the systems you intend to protect. In addition, you should keep in mind that customers of protected workloads must be able to connect and authenticate to these systems following a planned, unplanned or a test failover. To accommodate these requirements, you should take into account the following factors:

  • IP address space of the Azure virtual network hosting protected virtual machines after the failover. You have two choices when deciding which IP address space to use:
    • Use the same IP address space in the recovery site and the primary site. The benefit of this approach is that virtual machines can retain their on-premises IP addresses. This eliminates the need to update DNS records associated with these virtual machines. Such updates typically introduce delay during recovery. The drawback of this approach is that you cannot establish direct connectivity via Site-to-Site VPN or ExpressRoute between your on-premises locations and the recovery virtual network in Azure.
    • Use a non-overlapping IP address space in the recovery site and the primary site. The benefit of this approach is the ability to set up direct connectivity via Site-to-Site VPN or ExpressRoute between your on-premises locations and the recovery virtual network in Azure. This allows you, for example, to provision Azure virtual machines that are hosting Active Directory domain controllers in the recovery site and keep the Azure virtual machines online during normal business operations. By having these domain controllers available, you will minimize the failover time. In addition, you can perform a partial failover, which involves provisioning only a subset of the protected virtual machines in Azure, rather than all of them. The drawback is the need to update DNS records associated with the protected virtual machines after the failover takes place. To minimize the delay resulting from the DNS changes, you can lower the Time-To-Live (TTL) value of the DNS records associated with the protected virtual machines.
  • Network connectivity between your on-premises locations and the Azure virtual network that is hosting the recovery site. You have three choices when deciding which cross-premises network connectivity method to use:

    • Point-to-Site VPN
    • Site-to-Site VPN
    • ExpressRoute

Point-to-Site VPN is of limited use in this case, because it allows connectivity from individual computers only. It might be suitable primarily for a test failover when connecting to the isolated Azure virtual network where Site Recovery provisions replicas of the protected virtual machines. For planned and unplanned failovers, you should consider ExpressRoute, because it offers several advantages over Site-to-Site VPN, including the following:

  • All communication and replication traffic will flow via a private connection, rather than the Internet.
  • The connection will be able to accommodate a high volume of replication traffic.
  • Following a failover, on-premises users might be able to benefit from consistent, high-bandwidth, and low-latency connectivity to the Azure virtual network. This assumes that the ExpressRoute circuit will remain available even if the primary site fails.

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

Bigger disks on Azure Storage

If you followed the announcements during the Microsoft Build 2017 conference at the beginning of the month, one of them was the increase of disk sizes in Azure. Azure had a hard limit of 1 TB per disk. But those days are almost over. Oh yeah baby!

During a session at Build 2017 about big data workloads with Azure Blob Storage, they announced the increase of those disk limits. Today, Microsoft announced the preview of those disks.

So, what does that mean? Besides the increase in size, they are increasing the performance of the disks as well. Being able to have more space and IOPS is always nice.

New Disk Sizes Details

This table provides more details on the exact capabilities of the new disk sizes in Azure:

| Disk Type      | P40 (Premium) | P50 (Premium) | S40 (Standard)  | S50 (Standard)  |
|----------------|---------------|---------------|-----------------|-----------------|
| Disk Size      | 2048 GB       | 4095 GB       | 2048 GB         | 4095 GB         |
| Disk IOPS      | 7,500 IOPS    | 7,500 IOPS    | Up to 500 IOPS  | Up to 500 IOPS  |
| Disk Bandwidth | 250 MBps      | 250 MBps      | Up to 60 MBps   | Up to 60 MBps   |
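To make the table concrete, here is a hypothetical little helper (my own illustration, not from the announcement) that picks the smallest of the new sizes that fits a requested capacity; it deliberately ignores the pre-existing smaller tiers:

```python
# Hypothetical helper (not from the announcement): pick the smallest of the
# new disk sizes that fits a requested capacity, per the table above.
# Note: it deliberately ignores the pre-existing smaller tiers (P10-P30, etc.).
NEW_TIERS = {
    True:  [("P40", 2048), ("P50", 4095)],   # Premium
    False: [("S40", 2048), ("S50", 4095)],   # Standard
}

def smallest_new_tier(size_gb, premium=True):
    for name, max_gb in NEW_TIERS[premium]:
        if size_gb <= max_gb:
            return name
    raise ValueError("no new tier holds %d GB" % size_gb)
```

So a 3 TB data disk would land on P50 (or S50 on standard storage), while anything that still fits in 2048 GB can take the cheaper P40/S40 size.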

See the session for further details.


Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga