Instant Recovery Point and Large Disk Azure Backup support

With everything that happens on Azure, and following the announced increase of disk sizes in Azure from 1 TB to 4 TB, the only missing piece was support in Azure Backup for backing up and recovering those larger volumes.

But what changed? Today, an Azure Backup job consists of two phases:

  1. Taking a VM snapshot
  2. Transferring the VM snapshot to Azure Backup Vault

Regardless of how many recovery points you configure in your policy, a recovery point only becomes available once both phases are complete. With the introduction of the Instant Recovery Point feature in Azure Backup, a recovery point is created as soon as the snapshot is finished. That means your RPO and RTO can be reduced significantly.

You can use the same restore flow in Azure Backup to restore from this instant recovery point. You can identify a recovery point that comes from a snapshot in the Azure portal by its recovery point type, Snapshot. Once the snapshot has been transferred to the Azure Backup vault, the recovery point type changes to Snapshot and Vault.

By default, the snapshots are retained for seven days. This allows you to complete restores much faster from these snapshots, while at the same time reducing the time required to copy the backup from the vault to the storage account where you want to restore.
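Once a VM is protected, you can list its recovery points (and see whether each one is still a snapshot or already in the vault) with the AzureRM backup cmdlets. A minimal sketch, assuming a vault named myVault and a VM named myVM:

    # Point the backup cmdlets at the Recovery Services vault
    $vault = Get-AzureRmRecoveryServicesVault -Name "myVault"
    Set-AzureRmRecoveryServicesVaultContext -Vault $vault

    # Find the backup item for the VM
    $container = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureVM -Status Registered -FriendlyName "myVM"
    $item = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM

    # List the recovery points created in the last 7 days
    $startDate = (Get-Date).AddDays(-7).ToUniversalTime()
    $endDate = (Get-Date).ToUniversalTime()
    Get-AzureRmRecoveryServicesBackupRecoveryPoint -Item $item -StartDate $startDate -EndDate $endDate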

Instant Recovery Point Features

Please note that not all of the features are available yet; this is still in preview.

  1. Ability to use the snapshot taken as part of the backup job for recovery, without waiting for the data transfer to complete. Note: this removes the wait for the snapshot to be copied to the vault before triggering a restore. It also eliminates the additional storage requirement for backing up premium VMs.
  2. As part of the feature above, some data integrity checks will be enabled. These take some additional time as part of the backup; the checks will be relaxed over time, which will reduce backup times.
  3. Support for 4 TB unmanaged disks.
  4. Ability to use the original storage accounts (even when a VM's disks are distributed across storage accounts). This makes restores faster for a wide variety of VM configurations. Note: this is not the same as overriding the original VM.
  5. Ability to do all of the above for managed disks.

It is important to know that when you enable this feature, you will notice the following:

Since the snapshots are kept (to make recovery points available sooner and to reduce restore time), you will see some increase in storage cost, corresponding to the snapshots that are stored for seven days (if you go with the defaults).

When you are doing a restore from a snapshot recovery point for a premium VM, you might see a temporary storage location being used while the VM is created as part of the restore.

Once you enable the preview feature, you can't revert; that means you can't go back, and all future backups will use this feature.

VMs with managed disks are not supported by this feature yet. Backing up VMs that use managed disks is still supported, but they will use the normal backup flow (the Instant Recovery Point will not be used in this case). Migrating virtual machines between unmanaged and managed disks is not supported.

If you want to try this feature, run the following commands:

  1. Open PowerShell with elevated privileges
  2. Log in to your Azure account
    Login-AzureRmAccount
  3. Select the subscription where you want to enable the Instant Recovery Point feature
    Get-AzureRmSubscription -SubscriptionName "<SUBSCRIPTION_NAME>" | Select-AzureRmSubscription
  4. Register for the preview
    Register-AzureRmProviderFeature -FeatureName "InstantBackupandRecovery" -ProviderNamespace Microsoft.RecoveryServices
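Registration is not instantaneous. To confirm it has completed, you can query the feature afterwards with the same module:

    # RegistrationState should show "Registered" once the preview is enabled
    Get-AzureRmProviderFeature -FeatureName "InstantBackupandRecovery" -ProviderNamespace Microsoft.RecoveryServices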


Cheers,

Marcos Nogueira
Azure MVP

azurecentric.com
Twitter: @mdnoga

Disaster Recovery solution within Azure – Part 2

In the previous post (see here), I created the Recovery Services vault that is required to configure the Site Recovery infrastructure to protect the workloads, in order to have a disaster recovery solution within Azure. In this post, I will show how you can protect your workloads (Azure VMs) from one region to another.

The first step is to prepare the infrastructure. Azure Site Recovery has many scenarios in which you can protect workloads, but in this case I will only cover protecting Azure VMs to another region.

As mentioned in the previous blog post, my workloads are running in the West US 2 region. After creating the Recovery Services vault in the East US 2 region, I need to prepare the infrastructure.

To set up the infrastructure, follow these steps:

  1. On the Recovery Services vault, click Site Recovery, under GETTING STARTED
  2. Another blade will open. Click Prepare Infrastructure

  3. Select Azure – PREVIEW under Where are your machines located?
  4. Make sure that you select To Azure under Where do you want to replicate your machines to?
  5. Click OK

  6. Fill in all the required details:
    1. Source Location – the region where your workloads are running
    2. Azure virtual machine deployment model – make sure that you select Resource Manager
    3. Source resource group – the resource group where your workloads are running
      NOTE: If you have more than one resource group in the same region, you must run this setup again to add the workloads located in the other resource groups.
  7. Click OK to proceed
  8. Select the workloads that you want to protect
  9. Click OK

  10. On the Configure settings blade, click the Create target resources button to conclude the preparation of the infrastructure.
    NOTE: Under Target location, the location where you created the Recovery Services vault is chosen by default, although you can select another region to replicate to. It is not recommended to choose the location where your workloads are running.

  11. If you do want to change the default settings, click Customize. Otherwise, you can skip to the last step.
    There are two different settings that you can customize:

    1. Resource group, Network, Storage and Availability sets – here you configure which resource group, network, storage account and availability set your workload will use when you fail over the virtual machine.
    2. Replication policy – here you change the name of the replication policy, the recovery point retention and the replication frequency.
  12. If you want to change any of the following settings:
    1. Target resource group – the resource group where your workload will run in case of failover. In the drop-down list you will only see the resource groups available in the region you previously selected, although you can either create a new one (the default) or use an existing one.
    2. Target virtual network – here you define which network your workload will run on in case of failover. In the drop-down list you will only see the networks available in the region you previously selected, although you can either create a new one (the default) or use an existing one.
    3. Storage accounts
      1. Target Storage – the storage account your workload will be replicated to. In the drop-down list you will only see the storage accounts available in the region you previously selected, although you can either create a new one (the default) or use an existing one.
      2. Cache Storage – the storage account used to cache replication data in the source region before it is shipped to the target. You can either create a new one (the default) or use an existing one.
    4. Availability sets – the availability set your workload will run in, in case of failover. In the drop-down list you will only see the availability sets available in the region you previously selected, although you can either create a new one (the default), use an existing one, or choose not to set one (the Not Applicable option).
  13. After you change the settings that you want, click OK

  14. If you want to change the policy settings, these are your options (see the PowerShell sketch after this walkthrough):
    1. Choose between creating a new policy or using an existing one.
      NOTE: If you are running this for the first time, it is recommended to create a new policy. If you are running it a second or subsequent time, you can either choose an existing policy (if the settings are the same) or create a new policy if the settings are different. It is not recommended to create new policies with the same settings.
    2. Name – here you can change the name of the policy
    3. Recovery point retention – here you configure how long you want to keep each recovery point
    4. App-consistent snapshot frequency – here you choose how often app-consistent snapshots are taken
  15. After you change the settings that you want, click OK

  16. Click the Enable replication button to start protecting the workload.

  17. After the configuration is done, Azure will start to replicate the workload from one region to another. The replication time will depend on the size of the disks attached to the workload.

This whole process is live. That means you don't have any downtime while Azure is doing the initial replication.
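For reference, the replication policy settings from step 14 can also be created with the AzureRM Site Recovery cmdlets. A minimal sketch, assuming the vault from Part 1 is named myVault; the policy name, retention and snapshot frequency below are hypothetical values:

    # Point the Site Recovery cmdlets at the vault created in Part 1
    $vault = Get-AzureRmRecoveryServicesVault -Name "myVault"
    Set-AzureRmRecoveryServicesAsrVaultContext -Vault $vault

    # Azure-to-Azure replication policy: keep recovery points for 24 hours,
    # take an app-consistent snapshot every 4 hours
    New-AzureRmRecoveryServicesAsrPolicy -AzureToAzure -Name "A2APolicy" `
        -RecoveryPointRetentionInHours 24 `
        -ApplicationConsistentSnapshotFrequencyInHours 4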

Cheers,

Marcos Nogueira
Azure MVP

azurecentric.com
Twitter: @mdnoga

Load Balanced and Availability Set with multiple VMs

When it comes to best practices for setting up multiple virtual machines behind a load balancer and in an availability set, the information out there is either outdated or hard to find.

What is the scenario? Imagine that you need to set up a few VMs that need to share configuration and some files between them. How could you do it?

After a few searches on the web, I came across the IIS and Azure Files blog post. This post is dated October 2015, though, and as you know, Azure is changing at a very fast pace. My first thought was: is this still applicable? After a few tests in my test environment, I found that it is! Surprisingly! So, if you follow all the steps in the post, you can configure your environment.

In my case, there was a specific requirement that made this approach inapplicable: my workloads required low latency. So, I went searching again for how I could achieve this. And then I found the solution on GitHub! Microsoft published a template where the only thing you need to do is fill in the blanks. THANK YOU!

This is the template that I'm referring to: 201-vmss-win-iis-app-ssl.
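If you prefer deploying from PowerShell rather than the portal, something like the following should work; this is a sketch that assumes the template still lives under that folder name in the azure-quickstart-templates repository:

    # Deploy the quickstart template into an existing resource group;
    # PowerShell will prompt for the template's mandatory parameters
    New-AzureRmResourceGroupDeployment -ResourceGroupName "myRG" `
        -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/201-vmss-win-iis-app-ssl/azuredeploy.json"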

Solution overview and deployed resources

This template will create the following Azure resources:

  1. A VNet with two subnets. The VNet and the subnet IP prefixes are defined in the variables section, i.e. appVnetPrefix, appVnetSubnet1Prefix and appVnetSubnet2Prefix respectively. Set these accordingly.
  2. An NSG to allow HTTP, HTTPS and RDP access to the VMSS. The NSG is assigned to the subnets.
  3. Two NICs, two public IPs and two VMSSs with Windows Server 2012 R2
    3.1) The first VMSS is used for hosting the website and the second VMSS is used for hosting the services (WebAPI/WCF etc.)
    3.2) The VMSSs are load balanced with Azure load balancers. The load balancers are configured to allow RDP access through port ranges.
    3.3) The VMSSs are configured to auto-scale based on CPU usage. The scaled-out instances are automatically configured with Windows features, application deployment packages, SSL certificates, and the necessary IIS sites and SSL bindings.
  4. The first VMSS is deployed with a PFX certificate installed in the specified certificate store. The source of the certificate is stored in an Azure Key Vault.
  5. The DSC script configures various Windows features like IIS/Web Role, IIS management service and tools, .NET Framework 4.5, custom logging, request monitoring, HTTP tracing, Windows authentication, application initialization etc.
  6. DSC downloads Web Deploy 3.6 and URL Rewrite 2.0 and installs the modules.
  7. DSC downloads an application deployment package from an Azure storage account and installs it in the default website.
  8. DSC finds the certificate in the local store and creates a 443 binding.
  9. DSC creates the necessary rules so any incoming HTTP traffic gets automatically redirected to the corresponding HTTPS endpoints.

The following resources are deployed as part of the solution:

  • A VNet with two subnets – the VNet and the subnet IP prefixes are defined in the variables section, i.e. appVnetPrefix, appVnetSubnet1Prefix and appVnetSubnet2Prefix respectively. Set these accordingly
  • An NSG to define the security rules – it defines the rules for HTTP, HTTPS and RDP access to the VMSS. The NSG is assigned to the subnets
  • Two NICs, two public IPs and two VMSSs with Windows Server 2012 R2
  • Two Azure load balancers, one for each VMSS
  • Storage accounts for the VMSSs as well as for the artifacts

Prerequisites

  1. You should have a custom domain ready, pointed to the FQDN of the first public IP (the public IP for the web load balancer)
  2. SSL certificate: you should have a valid SSL certificate, either purchased from a CA or self-signed
  3. Create an Azure Key Vault and upload the certificate to it. Currently, Azure Key Vault supports certificates in PFX format. If your certificates are not in PFX format, import them into a Windows certificate store on a local machine and then export them to PFX format with the embedded private key and root certificate.
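For the third prerequisite, one common pattern is to store the PFX in a deployment-enabled Key Vault as a base64-encoded secret. A minimal sketch with AzureRM; the vault name, file path and password here are hypothetical:

    # Create a Key Vault that ARM deployments are allowed to read certificates from
    New-AzureRmKeyVault -VaultName "myKeyVault" -ResourceGroupName "myRG" `
        -Location "West US 2" -EnabledForDeployment -EnabledForTemplateDeployment

    # Wrap the PFX (base64-encoded) in the JSON envelope that VM/VMSS secrets expect
    $bytes = [System.IO.File]::ReadAllBytes("C:\certs\mycert.pfx")
    $json = @{
        data     = [System.Convert]::ToBase64String($bytes)
        dataType = "pfx"
        password = "P@ssw0rd!"
    } | ConvertTo-Json

    # Store it as a Key Vault secret; the template then references this secret's URL
    $secret = ConvertTo-SecureString -String $json -AsPlainText -Force
    Set-AzureKeyVaultSecret -VaultName "myKeyVault" -Name "iisCert" -SecretValue $secret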


Cheers,

Marcos Nogueira
Azure MVP
azurecentric.com
Twitter: @mdnoga


Multiple level alerts with ARM Template

If you run into the situation where you want to set multiple activity log alerts on a resource that you want to monitor, but when you configure or edit the alert you only see a single level of alert (picture below), you would normally create another alert on the same resource.

That is one way to solve the issue, but you can have multiple levels of alert on a single resource: you can create a multi-level alert in a JSON file and then apply the template to the resource you want to monitor.

The Activity Log Alert language is actually pretty powerful if you are willing to get your hands a little dirty and write the “condition” property in JSON yourself. For example, if you create an alert in the portal and then look at the “Create Activity Log Alert” event in your Activity Log, you will see in the properties field the full JSON of the alert that was created (unfortunately, delimited and in one field), and the “condition” property for an alert looks fairly similar to the JSON for ARM policy. It can contain:

  1. Both allOf (ANDs) and anyOf (ORs)
  2. equals (on a property that has a single value) or containsAny (on a property that is an array)
  3. Either an explicit field name (e.g. "category") or a JSON path with wildcards to any property that matches (e.g. "properties.impactedServices[?(@.ServiceName == 'Virtual Machines')].ImpactedRegions[*].RegionName")

Here’s a complex example of what you could put in the condition in raw JSON that would work correctly:

{
    "location": "global",
    "properties": {
        "scopes": [
            "/subscriptions/<SUBSCRIPTION_ID>"
        ],
        "description": "TEST",
        "condition": {
            "allOf": [
                {
                    "field": "category",
                    "equals": "ServiceHealth"
                },
                {
                    "field": "status",
                    "equals": "Active"
                },
                {
                    "field": "properties.impactedServices[?(@.ServiceName == 'Virtual Machines')].ImpactedRegions[*].RegionName",
                    "containsAny": [
                        "EastUS2",
                        "WestUS2"
                    ]
                }
            ],
            "anyOf": [
                {
                    "field": "level",
                    "equals": "Warning"
                },
                {
                    "field": "level",
                    "equals": "Error"
                }
            ]
        },
        "actions": {
            "actionGroups": [
                {
                    "actionGroupId": "/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/default-activitylogalerts/providers/microsoft.insights/actiongroups/<GROUP_NAME>",
                    "webhookProperties": {}
                }
            ]
        },
        "enabled": true
    }
}

This translates to: “Activate the alert if there is an Active Service Health event on Virtual Machines in either East US 2 or West US 2, but only if the level is either Warning or Error.”
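To push this raw JSON at the resource provider without building a full ARM template around it, one option is New-AzureRmResource. A sketch, assuming the JSON above is saved as alert.json and using the 2017-04-01 API version for activityLogAlerts; the resource group and alert name are placeholders:

    # Load the alert definition and create (or overwrite) the activity log alert
    $alert = Get-Content -Raw -Path .\alert.json | ConvertFrom-Json
    New-AzureRmResource `
        -ResourceId "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RG_NAME>/providers/microsoft.insights/activityLogAlerts/<ALERT_NAME>" `
        -Location $alert.location `
        -PropertyObject $alert.properties `
        -ApiVersion "2017-04-01" -Force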

Cheers,

Marcos Nogueira
Azure MVP

azurecentric.com
Twitter: @mdnoga

Move VM between VNETs in Azure

This week I ran into this scenario: I wanted to move a virtual machine in Azure between different VNets. You might have different reasons to do it, but what is the best way to do it?

First you have to understand the scenario: is it between VNets in the same region, or between regions? The same subscription or different subscriptions? And lastly, the same tenant or different tenants?

The way I look at this is simple. I know there are different ways to approach these scenarios, but I want to try to create a solution that you can use no matter what.

Let's work through the possibilities. What we know:

  • When you create a VM in Azure, you create several resources (compute, network and storage)
  • When you delete the VM in Azure, you only delete the compute (assuming that you click the delete button and didn't delete the resource group). That means the VHD and the network adapter (and all their dependencies) will remain intact.

So we can use these “orphan” resources (objects) to create a new VM on the VNet that we want. Genius! 😊

In this case we can use the script that I published to create the VM with the existing disk (see here). That is one option.
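As an illustration of the idea (not the exact script from that post), here is a minimal AzureRM sketch that attaches a surviving managed OS disk to a new NIC in the target VNet; all the names, the VM size and the Windows assumption are hypothetical:

    # Look up the orphaned OS disk and the target VNet/subnet
    $disk = Get-AzureRmDisk -ResourceGroupName "myRG" -DiskName "myVM_OsDisk"
    $vnet = Get-AzureRmVirtualNetwork -Name "targetVnet" -ResourceGroupName "myRG"
    $subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "default" -VirtualNetwork $vnet

    # Create a new NIC in the target VNet
    $nic = New-AzureRmNetworkInterface -Name "myVM-nic" -ResourceGroupName "myRG" `
        -Location $vnet.Location -SubnetId $subnet.Id

    # Rebuild the VM around the existing disk (Attach, not FromImage)
    $vm = New-AzureRmVMConfig -VMName "myVM" -VMSize "Standard_DS2_v2"
    $vm = Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $disk.Id -CreateOption Attach -Windows
    $vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
    New-AzureRmVM -ResourceGroupName "myRG" -Location $vnet.Location -VM $vm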

However, if you are on the path of using ARM templates with JSON, you might want to double-check that your JSON template reflects that as well (see here).

This is another way to solve the issue of moving a VM between VNets.

Cheers,

Marcos Nogueira
Azure MVP

azurecentric.com
Twitter: @mdnoga