Instant Recovery Point and Large Disk Azure Backup support

With everything happening on Azure, and following the announcement that disk sizes in Azure are increasing from 1 TB to 4 TB, the only missing piece was support in Azure Backup for backing up and recovering those larger volumes.

But what changed? Today, an Azure Backup job consists of two phases:

  1. Taking a VM snapshot
  2. Transferring the VM snapshot to Azure Backup Vault

Regardless of how many recovery points your policy is configured to keep, a recovery point only becomes available once both phases are complete. With the introduction of the Instant Recovery Point feature in Azure Backup, a recovery point is created as soon as the snapshot finishes. That means your RPO and RTO can be reduced significantly.

You can use the same restore flow in Azure Backup to restore from an instant recovery point. In the Azure portal, you can identify a recovery point created from a snapshot by its recovery point type of Snapshot. Once the snapshot has been copied to the Azure Backup vault, the recovery point type changes to Snapshot and Vault.
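As a rough sketch, you can list the recovery points for a protected VM with Azure PowerShell and inspect their type; the vault, container, and item names below are placeholders:

```powershell
# Assumes the AzureRM modules are installed and you are logged in.
# Select the Recovery Services vault (name is a placeholder).
$vault = Get-AzureRmRecoveryServicesVault -Name "myVault"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault

# Find the protected VM and its backup item (names are placeholders).
$container = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureVM -FriendlyName "myVM"
$item = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM

# List recovery points from the last 7 days; the recovery point type
# shows whether a point is still Snapshot only or Snapshot and Vault.
$start = (Get-Date).AddDays(-7).ToUniversalTime()
$end = (Get-Date).ToUniversalTime()
Get-AzureRmRecoveryServicesBackupRecoveryPoint -Item $item -StartDate $start -EndDate $end
```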

By default, the snapshots are retained for seven days. This allows you to complete restores much faster from these snapshots, while also reducing the time required to copy the backup from the vault to the storage account where you want to restore.

Instant Recovery Point Features

Please note that not all of these features are available yet; this is still in preview.

  1. Ability to see the snapshot taken as part of the backup job and use it for recovery without waiting for the data transfer to complete. Note: this reduces the wait for the snapshot to be copied to the vault before triggering a restore. It also eliminates the additional storage requirement for backing up premium VMs.
  2. As part of the feature above, some data integrity checks will also be enabled. These add some time to the backup. These checks will be relaxed over time, which will reduce backup times.
  3. Support for 4 TB unmanaged disks.
  4. Ability to use the original storage accounts (even when a VM's disks are distributed across storage accounts). This makes restores faster for a wide variety of VM configurations. Note: this is not the same as overwriting the original VM.
  5. Ability to do all of the above for managed disks.

 

It is important to know that when you enable this feature, you will notice the following:

Since the snapshots are stored in the Azure Backup vault to reduce recovery point creation and restore times, you will see some increase in storage cost, corresponding to the snapshots that are retained for seven days (if you keep the defaults).

When you are restoring from a snapshot recovery point for a Premium VM, you might see a temporary storage location being used while the VM is created as part of the restore.

Once you enable the preview feature, you can't revert; there is no going back, and all future backups will use this feature.

For VMs with managed disks, this feature is not supported yet. VMs that use managed disks can still be backed up, but they will use the normal backup flow (the Instant Recovery Point feature will not be used in this case). Migrating virtual machines between unmanaged and managed disks is not supported.

If you want to try this feature, run the following commands:

  1. Open PowerShell with elevated privilege
  2. Login to your Azure Account
    Login-AzureRmAccount
  3. Select the subscription where you want to enable the Instant Recovery Point feature
    Get-AzureRmSubscription -SubscriptionName "<SUBSCRIPTION_NAME>" | Select-AzureRmSubscription
  4. Register for the preview
    Register-AzureRmProviderFeature -FeatureName "InstantBackupandRecovery" -ProviderNamespace Microsoft.RecoveryServices
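After registering, you can check whether the preview feature shows as registered for your subscription; a quick sketch using the standard provider-feature cmdlet:

```powershell
# Check the registration state of the preview feature
# (it can take a few minutes to move to "Registered").
Get-AzureRmProviderFeature -FeatureName "InstantBackupandRecovery" `
    -ProviderNamespace Microsoft.RecoveryServices
```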

 

Cheers,

Marcos Nogueira
Azure MVP

azurecentric.com
Twitter: @mdnoga

Managed disk on Azure

You have probably already seen this when creating a virtual machine on Azure. After you enter basic information such as the name of the VM and choose its size, it's time to define and configure the settings of that VM. One of the first options is the use of managed disks.

But what are managed disks? How do they work? What are the implications of using managed disks?

So, first things first: managed disks allow you to abstract away the storage accounts used by your virtual machine. When you select managed disks, you don't have to set up or choose the storage account where those disks will be stored.

When you don't use managed disks, you have to select the storage account yourself.

With managed disks, you only have to specify the size of the disk, and Azure manages the storage for you. That gives you more granular access control, frees you from worrying about storage account limits, and provides higher scalability: you can create up to 10,000 disks per region per subscription.
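To illustrate how little you have to specify, here is a hedged sketch of creating an empty managed data disk with Azure PowerShell; the resource group, location, and disk name are placeholders:

```powershell
# Define the disk: only size, location, and account type are needed --
# no storage account is ever mentioned.
$diskConfig = New-AzureRmDiskConfig -Location "EastUS" -DiskSizeGB 128 `
    -AccountType StandardLRS -CreateOption Empty

# Create the managed disk in a resource group (names are placeholders).
New-AzureRmDisk -ResourceGroupName "myResourceGroup" -DiskName "myDataDisk" `
    -Disk $diskConfig
```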

Managed disks also increase the resilience of your availability sets, by making sure that each disk belongs to a storage unit in a different fault domain. In my experience, when you create storage accounts yourself, there is no guarantee they will be in different fault domains. In that scenario, even if you use availability sets, you don't avoid a single point of failure.

If you are thinking that you would prefer storage accounts in order to control access to the VHDs, with managed disks you can use RBAC as well, assigning permissions for a managed disk to one or more users. In this scenario, you manage access disk by disk rather than for an entire storage account, which means more granular access control. You can, for example, prevent a user from copying the VHD while still allowing them to use the virtual machine.
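For example, here is a sketch of granting a single user read access to one managed disk via RBAC; the user name, subscription ID, and resource names are placeholders:

```powershell
# Grant the built-in Reader role on a single managed disk,
# rather than on a whole storage account.
New-AzureRmRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Reader" `
    -Scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/disks/myDataDisk"
```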

The integration with Azure Backup is great. You can use the Azure Backup service with managed disks to create a backup job that will ease your VM restoration. Managed disks, however, only support Locally Redundant Storage (LRS) as a replication option, which means three copies of the VHD within the region.

To sum up, here are the benefits of managed disks:

  • Simple and scalable VM deployment
  • Better reliability for Availability Sets
  • Granular Access control
  • Azure Backup service support


Bigger disks on Azure Storage

If you followed the announcements during the Microsoft Build 2017 conference at the beginning of the month, one of them was the increase of disk sizes in Azure. Azure had a hard limit of 1 TB per disk, but those days are almost over. Oh yeah, baby!

During a Build 2017 session about big data workloads with Azure Blob Storage, Microsoft announced the increase of those disk limits. Today, Microsoft announced the preview of those disks.

So, what does that mean? Besides the increase in size, the performance of the disks is increasing as well. Being able to have more space and more IOPS is always nice.

New Disk Sizes Details

This table provides more details on the exact capabilities of the new disk sizes in Azure:

Disk Type       P40 (Premium)   P50 (Premium)   S40 (Standard)   S50 (Standard)
Disk Size       2048 GB         4095 GB         2048 GB          4095 GB
Disk IOPS       7,500 IOPS      7,500 IOPS      Up to 500 IOPS   Up to 500 IOPS
Disk Bandwidth  250 MBps        250 MBps        Up to 60 MBps    Up to 60 MBps

See the Build 2017 session recording for further details.

 


How to use Blob containers in Azure

I know that Microsoft Azure looks easy: you create your subscription, then start consuming resources. But in some cases it becomes overwhelming; with so many details to take into consideration, it's not easy to take advantage of everything Azure has to offer.

Regarding Azure storage, it sounds easy, but in many cases I see implementations that don't follow best practices and are not secure. For example, what level of access should you give to a blob? Is the default configuration secure?

Blobs are stored directly in the root container of the storage account or within a container created after the account is provisioned. You can create blob containers by using any of the tools you are comfortable with.

Creating blob containers

When you create a container, you must give it a name and choose the level of access that you want to allow from the following options:

  • Private. This is the default option. The container does not allow anonymous access.
  • Public Blob. This option allows anonymous access to each blob within the container; however, it prevents browsing the content of the container. In other words, it is necessary to know the full path to the target blob to access it.
  • Public Container. This option allows anonymous access to each blob within the container, with the ability to browse the container’s content.

Use the following commands in Windows PowerShell to create a new container. Before you can create the container, you must obtain a storage context object by passing the storage account’s primary key:

Creating a blob container in Windows PowerShell

$storageKey = (Get-AzureRmStorageAccountKey -ResourceGroupName 'myResourceGroup' -StorageAccountName 'mystorageaccount').Value[0]
$storeContext = New-AzureStorageContext -StorageAccountName 'mystorageaccount' -StorageAccountKey $storageKey
$container = New-AzureStorageContainer -Name 'mycontainer' -Permission Container -Context $storeContext

Administrators can view and modify containers, and upload and copy blobs, by using tools such as AzCopy and Azure Storage Explorer, or they can use the following Azure PowerShell cmdlets:

  • Get-AzureStorageBlobCopyState. Get the copy state of a specified storage blob.
  • Remove-AzureStorageBlob. Remove the specified storage blob.
  • Set-AzureStorageBlobContent. Upload a local file to the blob container.
  • Start-AzureStorageBlobCopy. Copy to a blob.
  • Stop-AzureStorageBlobCopy. Stop copying to a blob.
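As a usage sketch for one of these cmdlets, here is how a local file could be uploaded to a container; the file, container, and blob names are placeholders, and $storeContext is assumed to be a storage context created as shown earlier:

```powershell
# Upload a local file as a blob into an existing container.
Set-AzureStorageBlobContent -File "C:\data\report.txt" `
    -Container "mycontainer" -Blob "report.txt" -Context $storeContext
```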


Designing your Azure Storage – Part 2

In the previous post, we talked about designing standard Azure Storage (see here). In this post, I will focus on designing Premium Azure Storage.

While it is possible to aggregate the throughput of Azure-hosted virtual disks in Azure standard storage accounts by creating multiple disk volumes, this approach might not be sufficient to satisfy the I/O needs of the most demanding Azure IaaS virtual machine workloads. To account for these needs, Microsoft offers a high-performance storage service known as Premium Storage.

Virtual machines that use Premium Storage are capable of delivering throughput exceeding 100,000 IOPS by combining the benefits that two separate components offer. The first one is the SSD-based premium storage account, where virtual machine operating system and data disk files reside. The second one, known as Blobcache, is part of the virtual machine configuration, and it is available only on the DS and GS virtual machine series.

Blobcache is a relatively complex caching mechanism, which benefits from SSD storage on the Hyper-V host where the virtual machine is running.

There are separate limits applicable to the volume of I/O transfers between a virtual machine and a Premium Storage account, and between a virtual machine and its local cache. As a result, the effective throughput limit of a virtual machine is determined by combining the two limits. For the largest virtual machine sizes, this cumulative limit exceeds 100,000 IOPS (with a 256 KB I/O size) or 1 GB per second, whichever is lower. Keep in mind that the ability to benefit from caching is highly dependent on I/O usage patterns. For example, read caching would yield no advantage on disks used for Microsoft SQL Server transaction logs, but it would likely provide some improvement for disks holding SQL Server database files.

However, virtual machine I/O throughput is only the first of two factors that determine the overall maximum I/O throughput. The throughput of virtual machine disks also affects effective throughput. In the case of Premium Storage, this throughput depends on the disk size, and it is assigned one of the following performance levels:

  • Disk sizes of up to 128 GB, offering 500 IOPS or 100 MB per second.
  • Disk sizes of up to 512 GB, offering 2,300 IOPS or 150 MB per second.
  • Disk sizes of up to 1 TB, offering 5,000 IOPS or 200 MB per second.
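To make the interplay of the two limits concrete, here is a small sketch that computes the effective IOPS ceiling of a hypothetical VM with several premium disks, taking the lower of the combined disk limits and the VM-level limit; the numbers are illustrative assumptions, not quoted from Azure documentation:

```powershell
# Illustrative numbers only: a hypothetical VM-level cap and
# four 1 TB premium disks at 5,000 IOPS each.
$vmIopsLimit = 51200
$diskIops = @(5000, 5000, 5000, 5000)

# The effective ceiling is the lower of the summed disk limits
# and the VM-level limit.
$diskTotal = ($diskIops | Measure-Object -Sum).Sum
$effective = [Math]::Min($diskTotal, $vmIopsLimit)
"Effective IOPS ceiling: $effective"   # 20000 here: the disks are the bottleneck
```

In this sketch the disks cap out first; adding disks (or moving to larger disk sizes) raises the ceiling only until the VM-level limit takes over.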

Azure Premium Storage pricing

Azure Premium Storage pricing is calculated based on the size of the disks that you provision, rounded up to the nearest performance level.
