New version of Azure Backup Server introduces Modern Backup Storage technology

WOW! What a day for me! Microsoft just announced new and improved features in the new version of Azure Backup Server. Let's start!

They announced that with Azure Backup Server v2 (MABSv2) you can protect your VMware and Windows Server 2016 environments. That is a really important feature (in my opinion). Now I don't need to rely on third-party backup software vendors to back up workloads into Azure, especially if the workload is running on ESXi.

With MABSv2, Microsoft now introduces the Modern Backup Storage technology, based on features available in Windows Server 2016.

Windows Server 2016 brought enormous enhancements like ReFS block cloning. MABSv2 now leverages these modern technologies, such as ReFS cloning and dynamic VHDX, to reduce the overall TCO of backup. MABSv2 also introduces workload-aware storage technology, which helps optimize storage utilization.

One of the features that Windows Server 2016 introduced was Resilient Change Tracking (RCT), which improves virtual machine backup resiliency. MABSv2 now relies on RCT, so organizations no longer need to go through painful consistency-check scenarios after events like virtual machine storage migrations.

MABSv2 can detect and protect VMs that are stored on Storage Spaces Direct (S2D) or ReFS-based clusters seamlessly, without any manual steps. Windows Server 2016 VMs are more secure with shielded VM technology, and MABSv2 can protect and recover them securely.

A Windows Server 2012 R2 cluster can be upgraded to Windows Server 2016 using cluster rolling upgrade technology without bringing down the production environment. MABSv2 will continue to back up the VMs, so there is no missed SLA while the cluster upgrade is in progress.

You can also auto-protect SQL workloads and VMware VMs to the cloud using MABSv2. MABSv2 also comes with support for protecting SQL Server 2016, SharePoint 2016, and Exchange 2016 workloads.

MABSv2 can protect VMware VMs, although that support is still in testing for MABSv2 running on Windows Server 2016.

MABSv1 deployments can be upgraded to MABSv2 with a few simple steps. MABSv2 will continue to back up data sources without rebooting production servers, so you don't need to worry about reboots or backup interruptions when upgrading to MABSv2.

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

Containers on Azure – Part 2

In the previous post (see here), I talked about the concept of containers, Azure Container Service, and Azure Service Fabric. Now that you know the concepts and have an idea of how to implement them, let's see how you can deploy containers in Azure.

Azure offers several ways to provision Azure virtual machines that support Docker containers:

  • Install the Docker virtual machine extension. You can either add the extension to an existing Azure virtual machine running a Docker-supported distribution of Linux or include it when deploying a new Azure virtual machine via a Resource Manager template or a command-line script (a hedged PowerShell sketch appears at the end of this section). The extension installs the Docker daemon, also called the Docker Engine; the Docker client; and Docker Compose. The Docker daemon is necessary for an Azure virtual machine to function as a Docker host.
    Note: Docker Engine is a lightweight software component that runs as a daemon on a Linux operating system. It provides the environment for running containerized apps.
    The Docker client is management software that allows you to interact with the Docker Engine via a command line, which allows you to create, run, and transfer containers.
    Docker Compose is a utility for building and running Docker apps that consist of multiple containers.
  • Provision a Docker Azure virtual machine available from the Azure Marketplace. Use the Azure portal to deploy a Linux virtual machine to run Docker containerized workloads. During the virtual machine deployment, Azure automatically provisions the Docker virtual machine extension, which installs all the necessary Docker components.
  • Deploy an ACS cluster. This allows you to provision and manage multiple instances of Docker containers residing on clustered Docker hosts.
  • Use the Docker Machine driver to deploy an Azure virtual machine with support for Docker containers. Docker Machine is a command-line utility that allows you to perform several Docker-related administrative tasks, including provisioning new Docker hosts. The utility includes support for deploying Docker hosts on-premises and on Azure virtual machines. You must include the --driver azure (or -d azure) parameter when running the docker-machine create command. For example, the following command deploys a new Azure virtual machine named dockerazurevm1 in an Azure subscription. You specify the subscription ID, create an administrative user account named mrdocker, and enable connectivity on TCP port 80.

docker-machine create -d azure \
  --azure-ssh-user mrdocker \
  --azure-subscription-id your_Azure_subscription_ID \
  --azure-open-port 80 \
  dockerazurevm1

With the default settings, the virtual machine has the size Standard_A2 and resides on an Azure virtual network named docker-machine in a docker-machine resource group in the West US region. A default network security group associated with the virtual machine network interface allows inbound connectivity on TCP port 22 for Secure Shell connections and on TCP port 2376 for remote connections from the Docker client. The command also generates self-signed certificates that help to secure subsequent communication from the computer where you ran Docker Machine, and it stores the corresponding private key in your user account profile.

For the full syntax of the docker-machine create -d azure command, refer to Microsoft Azure.

Docker Machine is available on Windows, Linux, and Mac OS X operating systems. For installation instructions and links to download locations, refer to Install Docker Machine.
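
Going back to the first option in the list above (the Docker virtual machine extension), the following is a minimal Azure PowerShell sketch of adding the extension to an existing Linux virtual machine. The resource group, virtual machine name, location, and type handler version are assumptions for illustration only; verify the current extension version before using it.

# Sketch only: add the Docker VM extension to an existing Linux VM.
# The resource group, VM name, location, and version below are assumptions.
Login-AzureRmAccount

$extensionParams = @{
    ResourceGroupName  = "docker-rg"                  # hypothetical resource group
    VMName             = "dockerazurevm1"             # hypothetical existing Linux VM
    Location           = "West US"
    Name               = "DockerExtension"
    Publisher          = "Microsoft.Azure.Extensions"
    ExtensionType      = "DockerExtension"
    TypeHandlerVersion = "1.1"                        # check for the latest version
}
Set-AzureRmVMExtension @extensionParams

Once the extension finishes provisioning, the Docker daemon runs on the virtual machine and you can manage it with the Docker client, just as with the other options described above.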

Containers on an Azure Virtual Machine

Choosing the most convenient and efficient way to run containers in your environment depends on the location of your Docker hosts. Docker Machine allows you to manage on-premises and Azure-based Docker hosts in a consistent manner.

During Azure virtual machine provisioning, Docker Machine generates the self-signed certificate that you can use to establish a Secure Shell session to the Docker host. It also stores the certificate’s private key in your user account profile. This allows you to continue managing the Azure virtual machine from the same computer on which you initiated the virtual machine provisioning. To simplify management, you should also configure environment variables within your Windows command shell. To identify the environment variables to configure, run the following at the command prompt, where dockerazurevm1 is the name of the Azure virtual machine you deployed by running the docker-machine create command.

docker-machine env dockerazurevm1

This should return output similar to the following.

SET DOCKER_TLS_VERIFY="1"
SET DOCKER_HOST="tcp://191.237.46.90:2376"
SET DOCKER_CERT_PATH="C:\Users\Admin\.docker\dockerazurevm1\certs"
SET DOCKER_MACHINE_NAME="dockerazurevm1"
@FOR /f "tokens=*" %i IN ('docker-machine env dockerazurevm1') DO @%i

You can now start a container on the Azure virtual machine by running the following command.

docker run -d -p 80:80 --restart=always container_name

This automatically locates the container image named container_name, publishes it on port 80, starts it in detached mode, and ensures that the container always restarts after it terminates, regardless of the exit status. In detached mode, the console session is not attached to the container process, so you can continue using it to manage the Docker host. In attached mode, the console session displays the standard input, output, and error streams of the Docker container.

For the full syntax of the docker run command, refer to Docker Run.

The docker run command attempts to locate the latest version of the container image locally on the Docker host. By default, it checks the version against the Docker Hub. This is a central, Docker-managed repository of Docker images available publicly via the Internet. If there is no locally cached container image or its version is out-of-date, the Docker daemon automatically downloads the latest version from the Docker Hub.

You can set up a private Docker Registry to maintain your own collection of container images. A private Docker Registry runs as a container based on the registry image available from the Docker Hub. You can store your private images in an Azure storage account.

To set up a private Docker Registry, follow this procedure:

  1. Create an Azure storage account (a hedged PowerShell sketch appears after this procedure).
  2. Start a registry container on a Docker host by running the following command.
    docker run -d -p 5000:5000 \
      -e REGISTRY_STORAGE=azure \
      -e REGISTRY_STORAGE_AZURE_ACCOUNTNAME="storage_account_name" \
      -e REGISTRY_STORAGE_AZURE_ACCOUNTKEY="storage_account_key" \
      -e REGISTRY_STORAGE_AZURE_CONTAINER="registry" \
      --name=registry registry:2

    In the preceding command, storage_account_name and storage_account_key represent the name and one of the two keys of the Azure storage account you created in the previous step. This provisions a new registry container and makes it accessible via TCP port 5000.
    Note: To allow inbound connections on the port that you specified when executing the docker run command, be sure to update the network security group associated with the Docker Azure VM network interface where the registry container is running.

  3. Build a new Docker image by running the docker build command, or pull an existing image from Docker Hub by running the docker pull command.
  4. Use the following docker tag command to associate the image you created or downloaded in the previous step with the private registry.
    docker tag hello-world localhost:5000/image_name

    In the preceding command, hello-world is the image that you built or downloaded in the previous step, and image_name represents the name under which it will be stored in your private registry. This tags the image and designates a new repository in your private registry.
  5. To upload the newly tagged image to the private Docker Registry, run the following command.
    docker push localhost:5000/image_name

    This pushes the image into the private registry.

  6. To download the image from the private registry, run the following command.
    docker pull localhost:5000/image_name

    If you run the docker run command, the Docker daemon uses the newly downloaded image from your private registry.
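
Going back to step 1 of this procedure, a minimal Azure PowerShell sketch for creating the storage account and retrieving one of its keys (the values you then plug into the docker run command in step 2) could look like the following. The resource group, account name, and location are hypothetical, and depending on your AzureRM module version the keys may be exposed as Key1/Key2 instead of a list.

# Sketch only: create a storage account to back the private registry
# (resource group, account name, and location are hypothetical)
New-AzureRmResourceGroup -Name "registry-rg" -Location "West US"

New-AzureRmStorageAccount -ResourceGroupName "registry-rg" -Name "myregistrystore01" `
    -Location "West US" -SkuName Standard_LRS -Kind Storage

# Retrieve a key to pass as REGISTRY_STORAGE_AZURE_ACCOUNTKEY
$storageKey = (Get-AzureRmStorageAccountKey -ResourceGroupName "registry-rg" `
    -Name "myregistrystore01")[0].Value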

 

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

 

Containers on Azure – Part 1

In the last decade, hardware virtualization has drastically changed the IT landscape. One of many consequences of this trend is the emergence of cloud computing. However, a more recent virtualization approach promises to bring even more significant changes to the way you develop, deploy, and manage compute workloads. This approach is based on the concept of containers.

This series of posts explains containers and the ways you can implement them in Azure and in your on-premises datacenter.

When this concept was introduced to the Microsoft world, it was somewhat difficult for me to understand the whole concept and how I would use it. So, my purpose here is to ease the path and explain how containers work and the ways you can implement them in Azure and in your on-premises datacenter. The goal is to make it easier to deploy clusters of containerized workloads by using Azure Container Service (ACS).

Azure Service Fabric offers an innovative way to design and provision applications by dividing them into small, independently operating components called microservices. Containers and microservices are complementary technologies. By combining them, you can further increase the density of your workloads, optimize resource usage, and minimize cost.

What are Containers?

In a very simplistic way, containers are the next stage in virtualizing computing resources. Hardware virtualization freed people to a large extent from the constraints imposed by physical hardware. It enabled running multiple isolated instances of operating systems concurrently on the same physical hardware. Container-based virtualization virtualizes the operating system, allowing you to run multiple applications within the same operating system instance while maintaining isolation. Containers within a virtual machine provide functionality similar to that of virtual machines on a physical server. To better understand this analogy, this topic compares virtual machines with containers.

The following table lists the high-level differences between virtual machines and containers.

Feature | Virtual machines | Containers
Isolation mechanism | Built in to the hypervisor | Relies on operating system support
Required amount of memory | Includes operating system and app requirements | Includes containerized app requirements only
Startup time | Includes operating system boot, start of services, apps, and app dependencies | Includes only the start of apps and app dependencies; the operating system is already running
Portability | Portable, but the image is larger because it includes the operating system | More portable, because the image includes only apps and their dependencies
Image automation | Depends on the operating system and apps | Based on the Docker registry (for Docker images)

To better understand the difference between virtual machines and containers, I highly suggest reading the article Virtual Machines and Containers in Azure.

Compared with virtual machines, containers offer several benefits, including:

  • Increased speed with which you can develop and share application code.
  • An improved testing lifecycle for applications.
  • An improved deployment process for applications.
  • Increased density of your workloads, resulting in improved resource utilization.

The most popular containerization technology is available from Docker. Docker uses Linux built-in support for containers. Windows Server 2016 includes a container feature that delivers equivalent functionality in the Windows Server operating system.

Azure Container Service

ACS allows you to administer clusters of multiple Docker hosts running containerized apps. ACS manages the provisioning of cloud infrastructure components, including Azure virtual machines and virtual machine scale sets, Azure storage, virtual networks, and load balancers. Additionally, it provides the management and scaling of containerized apps to tens of thousands of containers via integration with the following two orchestration engines:

  • The Mesosphere Datacenter Operating System (DC/OS). A distributed operating system based on Apache Mesos, from the Apache Software Foundation.
  • Docker Swarm. Clustering software provided by Docker.

Based on this integration, you can manage ACS clusters on the DC/OS or the Docker Swarm platform by relying on the same tools you use to manage your existing containerized workflows.

You can provision an ACS cluster directly from the Azure portal. Alternatively, you can use an Azure Resource Manager template or the Azure command-line interface (a deployment sketch appears after the list below). During provisioning, you choose either DC/OS or Docker Swarm as the framework configuration. Subsequent configuration and management specifics depend mainly on this choice. Although both orchestration engines fully support Docker-formatted containers and Linux-based container isolation, they have architectural and functional differences, including:

  • DC/OS contains a Master availability set, public agent virtual machine scale set, and private agent virtual machine scale set, with fault-tolerant master/subordinate instances replicated by using Apache ZooKeeper. Docker Swarm contains a Master availability set and the agent virtual machine scale set.
  • DC/OS includes by default the Marathon orchestration platform, which manages the cluster-wide scheduling of containerized workloads. It supports multiple-resource scheduling that takes memory, CPU, disks, and ports into consideration.
  • With Docker Swarm, you can use the Docker command-line interface or the standard Docker application programming interface (API). DC/OS offers the REST API for interacting with its orchestration platform.
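
If you go with the Azure Resource Manager template option mentioned earlier, the deployment itself can be driven from Azure PowerShell. This is only a sketch; the resource group and the template and parameter file names are placeholders, and the Azure Quickstart Templates gallery on GitHub has sample ACS templates for both DC/OS and Docker Swarm that you can adapt.

# Sketch only: deploy an ACS cluster from an ARM template
# (resource group name and template file names are hypothetical)
New-AzureRmResourceGroup -Name "acs-rg" -Location "West US"

New-AzureRmResourceGroupDeployment -ResourceGroupName "acs-rg" `
    -TemplateFile ".\acs-cluster.json" `
    -TemplateParameterFile ".\acs-cluster.parameters.json"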

Azure Service Fabric

Azure Service Fabric is a cloud-based platform for developing, provisioning, and managing distributed, highly scalable, and highly available services and applications. Its capabilities result from dividing the functionality provided by these services and applications into individual components called microservices. Common examples of such microservices include the shopping carts or user profiles of commercial websites and the queues, gateways, and caches that provide infrastructure services. Multiple instances of these microservices run concurrently on a cluster of Azure virtual machines.

This approach might sound similar to building multitier applications by using Azure Cloud Services, which allows you to independently scale web and worker tiers. However, Azure Service Fabric operates on a much more granular level, as the term microservices suggests. This allows for much more efficient resource utilization while scaling to potentially thousands of virtual machines. Additionally, it allows developers to introduce gradual changes in the code of individual application components without having to upgrade the entire application.

Another feature that distinguishes Azure Service Fabric from traditional Platform as a Service (PaaS) services is support for both stateless and stateful components. Azure Cloud Services are stateless by design. To save state information, they have to rely on other Azure services, such as Azure Storage or Azure SQL Database. Azure Service Fabric, on the other hand, offers built-in support for maintaining state information. This minimizes or even eliminates the need for a back-end storage tier. It also decreases the latency when accessing application data.

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

How to extend Azure Service Fabric to on-premises?

You can deploy a Service Fabric cluster on any physical or virtual machine running the Windows Server operating system, including ones residing in your on-premises datacenters. You use the standalone Windows Server package available for download from the Azure online documentation website (see here).

To deploy an on-premises Service Fabric cluster, perform the following steps:

  1. Plan for your cluster infrastructure. You must consider the resiliency of the hardware and network components and the physical security of their location because in this scenario, you can no longer rely on the resiliency and security features built in to the Azure platform.
  2. Prepare the Windows Server computer to satisfy the prerequisites. Each computer must be running Windows Server 2012 or Windows Server 2012 R2, have at least 2 GB of RAM, and have the Microsoft .NET Framework 4.5.1 or later. Additionally, ensure that the Remote Registry service is running.
  3. Determine the initial cluster size. At a minimum, the cluster must have three nodes, with one node per physical or virtual machine. Adjust the number of nodes according to your projected workload.
  4. Determine the number of fault domains and upgrade domains. The assignment of fault domains should reflect the underlying physical infrastructure and the level of its resiliency to a single component failure. Typically, you consider a single physical rack to constitute a single fault domain. The assignment of upgrade domains should reflect the number of nodes that you plan to take down during application or cluster upgrades.
  5. Download the Service Fabric standalone package for Windows Server.
  6. Define the cluster configuration. Specify the cluster settings before provisioning. To do this, modify the ClusterConfig.json file included in the Service Fabric standalone package for Windows Server. The file has several sections, including:
    o    NodeTypes. NodeTypes allow you to divide cluster nodes into groups, represented by node types, and to assign common properties to the nodes within each group. These properties specify endpoint ports, placement constraints, or capacities.
    o    Nodes. Nodes determine the settings of individual cluster nodes, such as the node name, IP address, fault domain, or upgrade domain.
  7. Run the create cluster script. After you modify the ClusterConfig.json file to match your requirements and preferences, run the CreateServiceFabricCluster.ps1 Windows PowerShell script and reference the .json file as its parameter. You can run the script on any computer with connectivity to all the cluster nodes (a hedged example follows these steps).
  8. Connect to the cluster. To manage the cluster, connect to it via http://IP_Address_of_a_node:19080/Explorer/index.htm, where IP_Address_of_a_node is the IP address of any of the cluster nodes.
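
As a hedged illustration of steps 7 and 8, the commands involved could look like the following. The configuration file name and node IP address are placeholders, and the exact script parameters can vary between versions of the standalone package, so check the script's help before running it.

# Sketch only: create the cluster from the modified configuration file
# (run from the folder where you extracted the standalone package)
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -AcceptEULA

# Sketch only: connect to the cluster from a machine with the Service Fabric SDK installed
# (replace the IP address with that of any cluster node; 19000 is the default client endpoint)
Connect-ServiceFabricCluster -ConnectionEndpoint "10.0.0.4:19000"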

 

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

Cluster Shared Volumes (CSV) errors on Hyper-V Cluster

In a failover cluster, virtual machines can use Cluster Shared Volumes that are on the same LUN (disk), while still being able to fail over (or move from node to node) independently of one another. Virtual machines can use a Cluster Shared Volume only when communication between the cluster nodes and the volume is functioning correctly, including network connectivity, access, drivers, and other factors.

You probably didn't notice any issues with your VMs, but if you are getting the following events on your Hyper-V cluster nodes regarding the CSV volumes:

Warning – Disk 153

“The description for Event ID 153 from source disk cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.

The following information was included with the event:

\Device\Harddisk5\DR5

the message resource is present but the message is not found in the string/message table

Information – Microsoft-Windows-FailoverClustering 5121

“The description for Event ID 5121 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.

If the event originated on another computer, the display information had to be saved with the event.

The following information was included with the event:

Volume2

Cluster Disk 2 – Volume2

Error – Microsoft-Windows-FailoverClustering 5120

“The description for Event ID 5120 from source Microsoft-Windows-FailoverClustering cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.

If the event originated on another computer, the display information had to be saved with the event.

The following information was included with the event:

Volume1

Cluster Disk 1 – Volume1

STATUS_DEVICE_BUSY(80000011)

That means there has been an interruption to communication between a cluster node and a volume in Cluster Shared Volumes. This interruption may be short enough that it is not noticeable, or long enough that it interferes with services and applications using the volume.

How to resolve it

CSV – Review events related to communication with the volume

To perform the following procedure, you must be a member of the local Administrators group on each clustered server, and the account you use must be a domain account, or you must have been delegated the equivalent authority.

To open Event Viewer and view events related to failover clustering:

1. If Server Manager is not already open, click Start, click Administrative Tools, and then click Server Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.

2. In the console tree, expand Diagnostics, expand Event Viewer, expand Windows Logs, and then click System.

3. To filter the events so that only events with a Source of FailoverClustering are shown, in the Actions pane, click Filter Current Log. On the Filter tab, in the Event sources box, select FailoverClustering. Select other options as appropriate, and then click OK.

4. To sort the displayed events by date and time, in the center pane, click the Date and Time column heading.
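
If you prefer Windows PowerShell over the Event Viewer interface, roughly the same filtering can be done with the Get-WinEvent cmdlet. This is a sketch; adjust the provider name, the number of events, and the sorting to your situation.

# Sketch: list recent System-log events raised by the failover clustering provider
Get-WinEvent -FilterHashtable @{
    LogName      = 'System'
    ProviderName = 'Microsoft-Windows-FailoverClustering'
} -MaxEvents 50 |
    Sort-Object TimeCreated |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize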

CSV – Check storage and network configuration

To perform the following procedures, you must be a member of the local Administrators group on each clustered server, and the account you use must be a domain account, or you must have been delegated the equivalent authority.

Gathering information about the condition and configuration of a disk in Cluster Shared Volumes

To gather information about the condition and configuration of a disk in Cluster Shared Volumes:

1. Scan appropriate event logs for errors that are related to the disk.

2. Review information available in the interface for the storage and if needed, contact the vendor for information about the storage.

3. To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.

4. In the Failover Cluster Manager snap-in, expand the console tree and click Cluster Shared Volumes. In the center pane, expand the listing for the volume that you are gathering information about. View the status of the volume.

5. Still in the center pane, to prepare for testing a disk in Cluster Shared Volumes, right-click the disk, click Take this resource offline, and then if prompted, confirm your choice. Repeat this action for any other disks that you want to test.

6. Right-click the cluster containing the Cluster Shared Volumes, and then click Validate This Cluster.

7. On the Testing Options page, select Run only tests I select.

8. On the Test Selection page, clear the check boxes for System Configuration and Network. This leaves the tests for Cluster Configuration, Inventory, and Storage. You can run all these tests, or you can select only the specific tests that appear relevant to your situation.

NOTE: If you run the Storage tests, you will have downtime in your cluster. This is not recommended if you are troubleshooting a production environment. (A PowerShell alternative to these validation steps is sketched after this procedure.)

9. Follow the instructions in the wizard to run the tests.

10. On the Summary page, click View Report.

11. Under Results by Category, click Storage, click any test that is not labelled as Success, and then view the results.

12. Scroll back to the top of the report, and under Results by Category, click Cluster Configuration, and then click List Cluster Network Information. Confirm that any network that you intend for communication between nodes and Cluster Shared Volumes is labelled either Internal use or Internal and client use. Confirm that other networks (for example, networks used only for iSCSI and not for cluster network communication) do not have these labels.

13. If the information in the report shows that one or more networks are not configured correctly, return to the Failover Cluster Manager snap-in and expand Networks. Right-click the network that you want to modify, click Properties, and then make sure that the settings for Allow the cluster to use this network and Allow clients to connect through this network are configured as intended.

14. To bring disks back online, click Cluster Shared Volumes and, in the center pane, right-click a disk, and then click Bring this resource online. Repeat this action for any other disks that you want to bring online again.
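
As mentioned in the note earlier in this procedure, steps 6 through 11 can also be approximated from Windows PowerShell with the Test-Cluster cmdlet. This is only a sketch; the cluster name is a placeholder, and the same warning about downtime applies when you include the Storage tests.

# Sketch only: run just the cluster configuration, inventory, and storage tests
# WARNING: including the Storage tests causes downtime, as noted in the procedure above
Test-Cluster -Cluster "MyCluster" -Include "Cluster Configuration", "Inventory", "Storage"

The cmdlet writes a validation report that you can open in a browser, similar to the report produced by the wizard.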

Verifying settings for a network designated for network communication with Cluster Shared Volumes

To verify settings for a network designated for network communication with Cluster Shared Volumes:

1. Click Start, click Control Panel, click Network and Internet, and then click Network and Sharing Center.

2. In the Tasks pane, click Change adapter settings.

3. Right-click the connection you want, and then click Properties.

4. Make sure that the following check boxes are selected:

  • Client for Microsoft Networks
  • File and Printer Sharing for Microsoft Networks

Verifying that the required NTLM authentication is allowed

1. On a node in the cluster, to see the security policies that are in effect locally, click Start, click Administrative Tools, and then click Local Security Policy.

2. Navigate to Security Settings\Local Policies\Security Options.

3. In the center pane, click the Policy heading to sort the policies alphabetically.

4. Review Network security: Restrict NTLM: Add remote server exceptions for NTLM authentication and the items that follow it. If items related to “server exceptions” are marked Disabled, or other items have specific settings, a policy may be in place that is interfering with NTLM authentication on this server. If this is the case, contact an appropriate administrator (for example, your administrator for Active Directory or security) to ensure that NTLM authentication is allowed for cluster nodes that are using Cluster Shared Volumes.

Opening Event Viewer and viewing events related to failover clustering

To open Event Viewer and view events related to failover clustering:

1. If Server Manager is not already open, click Start, click Administrative Tools, and then click Server Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.

2. In the console tree, expand Diagnostics, expand Event Viewer, expand Windows Logs, and then click System.

3. To filter the events so that only events with a Source of FailoverClustering are shown, in the Actions pane, click Filter Current Log. On the Filter tab, in the Event sources box, select FailoverClustering. Select other options as appropriate, and then click OK.

4. To sort the displayed events by date and time, in the center pane, click the Date and Time column heading.

Finding more information about the error codes that some event messages contain

To find more information about the error codes that some event messages contain:

1. View the event, and note the error code.

2. Look up more information about the error code, either by searching for the code on the Microsoft Support website or by running the following at a command prompt:

NET HELPMSG errorcode
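
For example, Win32 error code 53, which frequently shows up in network path problems, translates like this (the code is just an illustration):

NET HELPMSG 53

This returns "The network path was not found."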

How to verify it

Confirm that the Cluster Shared Volume can come online. If there have been recent problems with writing to the volume, it can be appropriate to monitor event logs and monitor the function of the corresponding clustered virtual machine, to confirm that the problems have been resolved.

To perform the following procedures, you must be a member of the local Administrators group on each clustered server, and the account you use must be a domain account, or you must have been delegated the equivalent authority.

Confirming that a Cluster Shared Volume can come online

To confirm that a Cluster Shared Volume can come online:

1. To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.

2. In the Failover Cluster Manager snap-in, if the cluster you want to manage is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.

3. If the console tree is collapsed, expand the tree under the cluster you want to manage, and then click Cluster Shared Volumes.

4. In the center pane, expand the listing for the volume that you are verifying. View the status of the volume.

5. If a volume is offline, to bring it online, right-click the volume and then click Bring this resource online.

Using a Windows PowerShell command to check the status of a resource in a failover cluster

To use a Windows PowerShell command to check the status of a resource in a failover cluster:

1. On a node in the cluster, click Start, point to Administrative Tools, and then click Windows PowerShell Modules. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.

2. Type: Get-ClusterSharedVolume

If you run the preceding command without specifying a resource name, status is displayed for all Cluster Shared Volumes in the cluster.
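
To check a single volume, you can also pass its resource name. The name below is only an example matching the events earlier in this post; the default output shows the Name, State, and Node columns.

# Example (the resource name is illustrative): check one Cluster Shared Volume
Get-ClusterSharedVolume -Name "Cluster Disk 2"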