Deploy ARM Templates using Key Vault

I’ve been deploying a lot of resources to Azure, and one of my favorite practices is to create templates that I can reuse in other situations. But sometimes you need to increase the security, and that is where I started to leverage Key Vault to store my secrets.

Over the years, I have been using linked templates, where the main template passes parameters and Key Vault references to the linked template. The linked template, however, has to be uploaded to a storage blob container.
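To make this concrete, here is roughly what that hand-off looks like inside the main template. This is a minimal sketch of the Microsoft.Resources/deployments resource, with hypothetical vault, secret and URI values; Key Vault references like this are only allowed in parameters, not inline in resource properties:

{
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2021-04-01",
    "name": "linkedTemplate",
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "https://mytemplates.blob.core.windows.net/templates/vm.json"
        },
        "parameters": {
            "adminPassword": {
                "reference": {
                    "keyVault": {
                        "id": "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RG_NAME>/providers/Microsoft.KeyVault/vaults/<VAULT_NAME>"
                    },
                    "secretName": "vmAdminPassword"
                }
            }
        }
    }
}

Keep in mind the Key Vault has to be created (or updated) with the EnabledForTemplateDeployment flag for this reference to resolve.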

Let me give an example. Imagine the following scenario: you want to use the Azure Portal to deploy ARM templates, and you want to use Key Vault to store the secrets. How can you do it? And your manager’s concern is: since the template URI is public, how can you protect the storage container?

The way that I have been getting around this is to use a private blob container in conjunction with a SAS token.

Step 1 – Use the Azure File Copy task to copy your template to the storage account. Azure File Copy will give you a SAS token; use that token to deploy.

Step 2 – On the Azure Resource Group Deployment task, you have an option to override the template parameters.

Step 3 – You only need to append the SAS token to the final URL. A condensed PowerShell sketch of the whole flow follows below.
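Here is how those three steps look in plain PowerShell, outside of the pipeline tasks. This is a minimal sketch assuming the Az module; the account, container and resource group names are placeholders:

# Get the context of the private storage account that holds the template
$ctx = (Get-AzStorageAccount -ResourceGroupName 'templates-rg' -Name 'mytemplates').Context

# Step 1 - generate a short-lived, read-only SAS token for the private container
$sas = New-AzStorageContainerSASToken -Context $ctx -Name 'templates' -Permission r -ExpiryTime (Get-Date).AddHours(1)
if (-not $sas.StartsWith('?')) { $sas = '?' + $sas }

# Steps 2 and 3 - append the token to the template URL and deploy
$uri = "https://mytemplates.blob.core.windows.net/templates/azuredeploy.json$sas"
New-AzResourceGroupDeployment -ResourceGroupName 'app-rg' -TemplateUri $uri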

Cheers,

Marcos Nogueira
Azure MVP
azurecentric.com
Twitter: @mdnoga

Load Balancer and Availability Set with multiple VMs

When it comes to best practices on how to set up multiple virtual machines using a load balancer and an availability set, the information out there is either outdated or hard to find.

What is the scenario? Imagine that you need to set up a few VMs that need to share configuration and some files between them. How could you do it?

After a few searches on the web, I came across the IIS and Azure Files blog post. That post is dated October 2015, and as you know, Azure is changing at a very fast pace. My first thought was: is this still applicable? After a few tests in my test environment, I found that it is! Surprisingly! So, if you follow all the steps in the post, you can configure your environment the same way.
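At its core, that approach is just an Azure Files share that every VM behind the load balancer mounts, so they all see the same configuration and files. A minimal sketch of what runs on each VM, with a hypothetical storage account and share name (the key comes from Get-AzStorageAccountKey):

# Placeholders: replace with your storage account, key and share name
$account = 'mystorageacct'
$key     = '<STORAGE_ACCOUNT_KEY>'

# Persist the credential so the mapping survives reboots
cmdkey /add:"$account.file.core.windows.net" /user:"AZURE\$account" /pass:$key

# Map the shared folder that all the VMs will use
net use Z: "\\$account.file.core.windows.net\webcontent" /persistent:yes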

In my case, there was a specific requirement for which this approach wasn’t applicable: my workloads required low latency. So, I went searching again for how I could achieve this. And then I found the solution on GitHub! Microsoft published a template where the only thing you need to do is fill in the blanks. THANK YOU!

This is the template that I’m referring to: 201-vmss-win-iis-app-ssl.

Solution overview and deployed resources

This template will create the following Azure resources:

  1. A VNet with two subnets. The VNet and subnet IP prefixes are defined in the variables section (appVnetPrefix, appVnetSubnet1Prefix and appVnetSubnet2Prefix, respectively). Set these accordingly.
  2. An NSG to allow HTTP, HTTPS and RDP access to the VMSS. The NSG is assigned to the subnets.
  3. Two NICs, two public IPs and two VMSSs with Windows Server 2012 R2
    3.1) The first VMSS hosts the website and the second VMSS hosts the services (WebAPI/WCF etc.)
    3.2) The VMSSs are load balanced with Azure load balancers. The load balancers are configured to allow RDP access via port ranges.
    3.3) The VMSSs are configured to auto scale based on CPU usage. The scaled-out instances are automatically configured with Windows features, application deployment packages, SSL certificates, and the necessary IIS sites and SSL bindings.
  4. The first VMSS is deployed with a PFX certificate installed in the specified certificate store. The source of the certificate is stored in an Azure Key Vault.
  5. The DSC script configures various Windows features like IIS/Web Role, IIS management service and tools, .NET Framework 4.5, custom logging, request monitoring, HTTP tracing, Windows auth, application initialization, etc.
  6. DSC downloads Web Deploy 3.6 and URL Rewrite 2.0 and installs the modules.
  7. DSC downloads an application deployment package from an Azure Storage account and installs it in the default website.
  8. DSC finds the certificate in the local store and creates a 443 binding.
  9. DSC creates the necessary rules so that any incoming HTTP traffic gets automatically redirected to the corresponding HTTPS endpoints.

The following resources are deployed as part of the solution:

A VNet with two subnets

The VNet and subnet IP prefixes are defined in the variables section (appVnetPrefix, appVnetSubnet1Prefix and appVnetSubnet2Prefix, respectively). Set these accordingly.

  • An NSG to define the security rules – it defines the rules for HTTP, HTTPS and RDP access to the VMSS. The NSG is assigned to the subnets
  • Two NICs, two public IPs and two VMSSs with Windows Server 2012 R2
  • Two Azure load balancers, one for each VMSS
  • Storage accounts for the VMSSs as well as for the artifacts

Prerequisites

  1. You should have a custom domain ready, and point that custom domain to the FQDN of the first public IP (the public IP of the web load balancer).
  2. SSL certificate: you should have a valid SSL certificate, either purchased from a CA or self-signed.
  3. Create an Azure Key Vault and upload the certificate to it. Currently, Azure Key Vault supports certificates in PFX format. If your certificate is not in PFX format, import it into a Windows certificate store on a local machine and then export it to PFX format with the embedded private key and root certificate. A scripted version is sketched below.
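That third prerequisite can be scripted as well. A minimal sketch, assuming the Az module; the vault, resource group, certificate name and PFX path are all hypothetical:

# Create a vault that deployments (and the VMSS) are allowed to read certificates from
New-AzKeyVault -VaultName 'myAppVault' -ResourceGroupName 'app-rg' -Location 'eastus2' -EnabledForDeployment -EnabledForTemplateDeployment

# Upload the PFX (with its embedded private key) as a certificate
$pfxPwd = ConvertTo-SecureString -String '<PFX_PASSWORD>' -AsPlainText -Force
Import-AzKeyVaultCertificate -VaultName 'myAppVault' -Name 'webSslCert' -FilePath 'C:\certs\web.pfx' -Password $pfxPwd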

 

Cheers,

Marcos Nogueira
Azure MVP
azurecentric.com
Twitter: @mdnoga

 

Multiple level alerts with ARM Template

If you run into the situation where you want to set multiple activity log alerts on a resource you want to monitor, but when you configure or edit the alert you only see a single level of alert, you would normally just create another alert on the same resource.

That is one way to solve the issue, but you can also have multiple levels of alert on the same resource: create the multiple-level alert in a JSON file and then apply the template to the resource you want to monitor.

The Activity Log Alert language is actually pretty powerful if you are willing to get your hands a little dirty and write the "condition" property in JSON yourself. For example, if you create an alert in the portal and then look at the "Create Activity Log Alert" event in your Activity Log, you will see in the properties field the full JSON of the alert that was created (unfortunately, delimited and in one field). The "condition" property for an alert looks fairly similar to the JSON for ARM policy. It can contain:

  1. Both allOf (ANDs) and anyOf (ORs)
  2. equals (on a property that has a single value) or containsAny (on a property that is an array)
  3. Either an explicit field name (e.g. "category") or a JSON path with wildcards to any property that matches (e.g. "properties.impactedServices[?(@.ServiceName == 'Virtual Machines')].ImpactedRegions[*].RegionName")

Here’s a complex example of what you could put in the condition in raw JSON that would work correctly:

{
    "location": "global",
    "properties": {
        "scopes": [
            "/subscriptions/<SUBSCRIPTION_ID>"
        ],
        "description": "TEST",
        "condition": {
            "allOf": [
                {
                    "field": "category",
                    "equals": "ServiceHealth"
                },
                {
                    "field": "status",
                    "equals": "Active"
                },
                {
                    "field": "properties.impactedServices[?(@.ServiceName == 'Virtual Machines')].ImpactedRegions[*].RegionName",
                    "containsAny": [
                        "EastUS2",
                        "WestUS2"
                    ]
                },
                {
                    "anyOf": [
                        {
                            "field": "level",
                            "equals": "Warning"
                        },
                        {
                            "field": "level",
                            "equals": "Error"
                        }
                    ]
                }
            ]
        },
        "actions": {
            "actionGroups": [
                {
                    "actionGroupId": "/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/default-activitylogalerts/providers/microsoft.insights/actiongroups/<GROUP_NAME>",
                    "webhookProperties": {}
                }
            ]
        },
        "enabled": true
    }
}

This translates to: “Activate the alert if there is an Active Service Health event on Virtual Machines in either East US 2 or West US 2, but only if the level is either Warning or Error.”
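To apply it, wrap that body in a "microsoft.insights/activityLogAlerts" resource inside an ARM template and deploy it like any other template. A minimal sketch, where alert-template.json is a hypothetical file containing such a template:

# Deploy the activity log alert to the resource group that holds your alerts
New-AzResourceGroupDeployment -ResourceGroupName 'default-activitylogalerts' -TemplateFile '.\alert-template.json'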

Cheers,

Marcos Nogueira
Azure MVP

azurecentric.com
Twitter: @mdnoga

Move VM between VNETs in Azure

This week I ran into this scenario: I wanted to move a virtual machine in Azure between different VNets. You might have different reasons to do it, but what is the best way to do it?

First you have to understand the scenario: is it between VNets in the same region, or between regions? The same subscription or different subscriptions? And lastly, the same tenant or different tenants?

The way that I look at this is simple. I know that there are different ways to approach these scenarios, but I want to create a solution that you can use no matter what.

Let’s work on the possibilities. What we know:

  • When you create a VM in Azure, you create several resources (compute, network and storage)
  • When you delete the VM in Azure, you only delete the compute (assuming that you click the delete button and didn’t delete the resource group). That means the VHD and the network adapter (and all their dependencies) remain intact.

So we can use these “orphan” resources (objects) to create a new VM on the VNet that we want. Genius! 😊

In this case we can use the script that I published to create the VM from the existing disk (see here). That is one option; a condensed sketch follows.
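Condensed, the flow looks like this. A minimal sketch assuming the Az module, a managed OS disk, and hypothetical names; note the NIC has to be a new one created in the target VNet, since an existing NIC cannot move between VNets:

# Delete only the compute; the disk and NIC objects survive
Remove-AzVM -ResourceGroupName 'app-rg' -Name 'myVM'

# Grab the surviving OS disk and a NIC that lives in the target VNet
$disk = Get-AzDisk -ResourceGroupName 'app-rg' -DiskName 'myVM_OsDisk'
$nic  = Get-AzNetworkInterface -ResourceGroupName 'app-rg' -Name 'myVM-nic-newvnet'

# Rebuild the VM around the existing disk
$vm = New-AzVMConfig -VMName 'myVM' -VMSize 'Standard_D2s_v3'
$vm = Set-AzVMOSDisk -VM $vm -ManagedDiskId $disk.Id -CreateOption Attach -Windows
$vm = Add-AzVMNetworkInterface -VM $vm -Id $nic.Id
New-AzVM -ResourceGroupName 'app-rg' -Location $disk.Location -VM $vm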

However, if you are going down the path of using an ARM template with JSON, you might want to double-check that your JSON template reflects this as well (see here).

That is another way to solve the issue of moving a VM between VNets.

Cheers,

Marcos Nogueira
Azure MVP

azurecentric.com
Twitter: @mdnoga

Deploy an ARM VM using an existing VHD in Azure

The other day, one of my customers wanted to rebuild a virtual machine from its existing VHD and place it in a new resource group and on a different VNet, but without transferring the VHD. The idea was to keep the VHD parked in a storage account and avoid transferring such a huge file.

First, I want to clarify that if you delete a VM, you are not deleting all of its resources; the VHD(s), network adapter(s) and network IPs remain intact. You are only deleting the compute section of the VM. That means you can redeploy using the same configuration, or change the network, for example.

To achieve that though, you need to do it through PowerShell and/or using JSON files.

So, if you change the original JSON file by just replacing the VHD, you will probably get an error message saying, “Cannot attach an existing OS disk if the VM is created from a platform or user image.”

To avoid that, you have to change the JSON file so that createOption uses the attach method instead.

Here is what you need to change:

Original JSON:

"storageProfile": {
    "imageReference": {
        "publisher": "MicrosoftWindowsServer",
        "offer": "WindowsServer",
        "sku": "[parameters('windowsOSVersion')]",
        "version": "latest"
    },
    "osDisk": {
        "createOption": "FromImage"
    },

 

Replace with:

"storageProfile": {
    "osDisk": {
        "createOption": "Attach",
        "managedDisk": {
            "id": "<MANAGED_DISK_ID>"
        }
    }
}
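Note that when attaching an existing OS disk this way, the osDisk section also needs an "osType" property ("Windows" or "Linux"). To fill in the managed disk ID, here is a minimal sketch with hypothetical names, assuming the Az module:

# Look up the surviving managed OS disk and copy its resource ID into the template
$disk = Get-AzDisk -ResourceGroupName 'app-rg' -DiskName 'myVM_OsDisk'
$disk.Id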

 

Cheers,

Marcos Nogueira
Azure MVP
azurecentric.com
Twitter: @mdnoga