Azure Cloud Shell vs Azure CLI

At a local community event, after my presentation I was answering some questions, and one of the attendees asked me if Azure Cloud Shell is the same tool as Azure CLI.

So here it is: Azure Cloud Shell is a container-based shell running in the Azure portal (or in the browser), using either a Linux (Bash) or a Windows (PowerShell) base container. Both containers support Azure CLI, and the Windows-based container also supports Azure PowerShell. For more information on how to use Azure Cloud Shell, see the previous post (here).

As far as I know, Microsoft updates the CLI and PowerShell versions on a regular basis to the most stable, up-to-date releases. The update is applied to the respective container (Bash to the Linux container, and PowerShell to the Windows container, which runs Windows Server 2016), so everyone is always running the latest version on Azure Cloud Shell.

Does that mean that when we enable Azure Cloud Shell, we are creating two containers? Yes. If you read the previous post about how to use Azure Cloud Shell (see link above), you will see that when the required resources are created, besides the resource group, storage account, and file share, you are also creating the containers. Those containers, however, are created on the fly: every time you open Azure Cloud Shell, you are connecting to the respective container.

Bash and PowerShell, with the possibility to run the CLI?

You have the choice to run a Windows-based cloud shell or a Linux-based one. The Windows one comes with Azure PowerShell and also Azure CLI. The Linux one comes with Bash and Azure CLI. In the future, it will also support Azure PowerShell on Linux.

Azure Cloud Shell comes with open-source PowerShell preinstalled in both the Windows- and the Linux-based Cloud Shell. I want to clarify that on the Windows-based cloud shell, Azure CLI runs directly in cmd.exe (Bash on Windows is not enabled in Cloud Shell yet).
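If you want to verify which versions your Cloud Shell containers are currently running, here is a quick sanity check (a minimal sketch; the module name assumes the AzureRM module that Cloud Shell shipped at the time of writing):

# In the Bash (Linux) container, check the Azure CLI version
az --version

# In the PowerShell (Windows) container, check the Azure PowerShell module version
Get-Module -ListAvailable AzureRM | Select-Object Name, Version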

Cheers,

Marcos Nogueira
Azure MVP

azurecentric.com
Twitter: @mdnoga

Create AzureRM VM from existing VHD

While I was helping a customer create an Azure RM virtual machine from an existing VHD, I adapted one of my existing scripts (with some searching on the internet to improve it) that I normally use to create Azure RM virtual machines.

In this case, I needed to create the VM from an existing VHD in a storage account. Usually, when you create a new VM, you have to set up the OS type and select the base image.

This script creates a new VM from an image:

$osDiskName = $vmname + '_OS_Disk'
$osDiskCaching = 'ReadWrite'
$osDiskVhdUri = "https://<STORAGE_ACCOUNT>.blob.core.windows.net/vhds/" + $vmname + "_os.vhd"

# Setup OS & Image
$user = "MrAzure"
$password = '<PASSWORD>'
$securePassword = ConvertTo-SecureString $password -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($user, $securePassword)
$vm = Set-AzureRmVMOperatingSystem -VM $vm -Windows -ComputerName $vmname -Credential $cred
$vm = Set-AzureRmVMSourceImage -VM $vm -PublisherName $AzureImage.PublisherName -Offer $AzureImage.Offer `
    -Skus $AzureImage.Skus -Version $AzureImage.Version
$vm = Set-AzureRmVMOSDisk -VM $vm -VhdUri $osDiskVhdUri -Name $osDiskName -CreateOption FromImage -Caching $osDiskCaching

 

To use the existing disk, you need to replace the script above with this one:

$osDiskName = $vmname + '_OS_Disk'
$osDiskCaching = 'ReadWrite'
$osDiskVhdUri = "https://<STORAGE_ACCOUNT>.blob.core.windows.net/vhds/" + $vmname + "_os.vhd"

$vm = Set-AzureRmVMOSDisk -VM $vm -VhdUri $osDiskVhdUri -Name $osDiskName -CreateOption Attach -Windows -Caching $osDiskCaching
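For context, here is a minimal end-to-end sketch of how that line fits into a full deployment. The resource group, NIC, VM name, and size below are hypothetical placeholders, so adjust them to your environment. Note that when you attach an existing OS disk, you do not call Set-AzureRmVMOperatingSystem or Set-AzureRmVMSourceImage:

# Hypothetical names; replace with your own values
$vmname = 'myvm01'
$rgName = 'myResourceGroup'
$location = 'West US'
$osDiskName = $vmname + '_OS_Disk'
$osDiskCaching = 'ReadWrite'
$osDiskVhdUri = "https://<STORAGE_ACCOUNT>.blob.core.windows.net/vhds/" + $vmname + "_os.vhd"

# Build the VM configuration and attach an existing network interface
$vm = New-AzureRmVMConfig -VMName $vmname -VMSize 'Standard_DS1_v2'
$nic = Get-AzureRmNetworkInterface -Name ($vmname + '_nic') -ResourceGroupName $rgName
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id

# Attach the existing OS VHD instead of provisioning from an image
$vm = Set-AzureRmVMOSDisk -VM $vm -VhdUri $osDiskVhdUri -Name $osDiskName -CreateOption Attach -Windows -Caching $osDiskCaching

# Create the VM
New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm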

 

Cheers,

Marcos Nogueira
Azure MVP
azurecentric.com
Twitter: @mdnoga

New version of Azure Backup Server introduces Modern Backup Storage technology

WOW! What a day for me! Microsoft just announced new and improved features in the new version of Azure Backup Server. Let's start!

They announced that with Azure Backup Server v2 (MABSv2) you can protect your VMware and Windows Server 2016 environments. That is a really important feature (in my opinion). Now I don't need to rely on third-party backup software vendors to back up workloads into Azure, especially if the workload is running on ESXi.

With MABSv2, Microsoft now introduces Modern Backup Storage technology, based on features available in Windows Server 2016.

Windows Server 2016 brought enormous enhancements, like ReFS cloning. MABSv2 now leverages these modern technologies, like ReFS cloning and dynamic VHDX, to reduce the overall TCO of backup. MABSv2 also introduces workload-aware storage technology, which helps to optimize storage utilization.

One of the features that Windows Server 2016 introduced was Resilient Change Tracking (RCT), which improves virtual machine backup resiliency. With RCT, MABSv2 backups are now resilient, and organizations no longer need to go through painful consistency checks in scenarios like virtual machine storage migrations.

MABSv2 can detect and protect VMs that are stored on Storage Spaces Direct (S2D) and ReFS-based clusters seamlessly, without any manual steps. Windows Server 2016 VMs are more secure with shielded VM technology, and MABSv2 can protect and recover them securely.

A Windows Server 2012 R2 cluster can be upgraded to Windows Server 2016 using cluster rolling upgrade without bringing down the production environment. MABSv2 will continue to back up the VMs, so no SLA is missed while the cluster upgrade is in progress.

You can also auto-protect SQL workloads and VMware VMs to the cloud using MABSv2. MABSv2 also comes with support for protecting SQL Server 2016, SharePoint 2016, and Exchange 2016 workloads.

MABSv2 can protect VMware VMs, although that support is still in testing for MABSv2 running on Windows Server 2016.

MABSv1 deployments can be upgraded to MABSv2 in a few simple steps, and MABSv2 will continue to back up data sources without rebooting production servers. So you don't need to worry about rebooting production servers or backup interruptions during the upgrade to MABSv2.

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

Containers on Azure – Part 2

In the previous post (see here), I talked about the concept of containers, Azure Container Service, and Azure Service Fabric. Now that you know the concepts and have an idea of how to implement them, let's see how you can deploy containers in Azure.

Azure offers several ways to provision Azure virtual machines that support Docker containers:

  • Install the Docker virtual machine extension. You can either add the extension to an existing Azure virtual machine running a Docker-supported distribution of Linux or include it when deploying a new Azure virtual machine via a Resource Manager template or a command-line script. The extension installs the Docker daemon, also called the Docker Engine; the Docker client; and Docker Compose. The Docker daemon is necessary for an Azure virtual machine to function as a Docker host.
    Note: Docker Engine is a lightweight software component that runs as a daemon on a Linux operating system. It provides the environment for running containerized apps.
    The Docker client is management software that allows you to interact with the Docker Engine via a command line, which allows you to create, run, and transfer containers.
    Docker Compose is a utility for building and running Docker apps that consist of multiple containers.
  • Provision a Docker Azure virtual machine available from the Azure Marketplace. Use the Azure portal to deploy a Linux virtual machine to run Docker containerized workloads. During the virtual machine deployment, Azure automatically provisions the Docker virtual machine extension, which installs all the necessary Docker components.
  • Deploy an ACS cluster. This allows you to provision and manage multiple instances of Docker containers residing on clustered Docker hosts.
  • Use the Docker Machine driver to deploy an Azure virtual machine with support for Docker containers. Docker Machine is a command-line utility that allows you to perform several Docker-related administrative tasks, including provisioning new Docker hosts. The utility includes support for deploying Docker hosts on-premises and on Azure virtual machines. You must include the -d azure (or --driver azure) parameter when running the docker-machine create command. For example, the following command deploys a new Azure virtual machine named dockerazurevm1 in an Azure subscription. You specify the subscription ID, create an administrative user account named mrdocker, and enable connectivity on TCP port 80.

docker-machine create -d azure \
  --azure-ssh-user mrdocker \
  --azure-subscription-id your_Azure_subscription_ID \
  --azure-open-port 80 \
  dockerazurevm1

With the default settings, the virtual machine has the size Standard_A2 and resides on an Azure virtual network named docker-machine in a docker-machine resource group in the West US region. A default network security group associated with the virtual machine network interface allows inbound connectivity on TCP port 22 for Secure Shell connections and on TCP port 2376 for remote connections from the Docker client. The command also generates self-signed certificates that help to secure subsequent communication from the computer where you ran Docker Machine, and it stores the corresponding private key in your user account profile.
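Once provisioning completes, you can confirm that the new host is up before pointing your Docker client at it. A quick check with standard Docker Machine commands:

# List all Docker hosts known to Docker Machine, with their state and URL
docker-machine ls

# Show the public IP address of the new Azure-based host
docker-machine ip dockerazurevm1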

For the full syntax of the docker-machine create -d azure command, refer to Microsoft Azure.

Docker Machine is available on Windows, Linux, and Mac OS X operating systems. For installation instructions and links to download locations, refer to Install Docker Machine.

Containers on an Azure Virtual Machine

Choosing the most convenient and efficient way to run containers in your environment depends on the location of your Docker hosts. Docker Machine allows you to manage on-premises and Azure-based Docker hosts in a consistent manner.

During Azure virtual machine provisioning, Docker Machine generates the self-signed certificate that you can use to establish a Secure Shell session to the Docker host. It also stores the certificate’s private key in your user account profile. This allows you to continue managing the Azure virtual machine from the same computer on which you initiated the virtual machine provisioning. To simplify management, you should also configure environment variables within your Windows command shell. To identify the environment variables to configure, run the following at the command prompt, where dockerazurevm1 is the name of the Azure virtual machine you deployed by running the docker-machine create command.

docker-machine env dockerazurevm1

This should return output similar to the following.

SET DOCKER_TLS_VERIFY="1"
SET DOCKER_HOST="tcp://191.237.46.90:2376"
SET DOCKER_CERT_PATH="C:\Users\Admin\.docker\dockerazurevm1\certs"
SET DOCKER_MACHINE_NAME="dockerazurevm1"
@FOR /f "tokens=*" %i IN ('docker-machine env dockerazurevm1') DO @%i

You can now start a container on the Azure virtual machine by running the following command.

docker run -d -p 80:80 --restart=always container_name

This automatically locates the container image named container_name, publishes it on port 80, initiates its execution in detached mode, and ensures that the container always restarts after it terminates, regardless of the exit status. In detached mode, the console session is not attached to the container process, so you can use it to continue managing the Docker host. In attached mode, the console session displays the standard input, output, and errors from the Docker container.
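After starting a container in detached mode, it is useful to confirm that it is running and to inspect its output. A quick sketch with standard Docker commands:

# List running containers, their ports, and their status
docker ps

# Stream the console output of a container (use the ID or name shown by docker ps)
docker logs <container_id>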

For the full syntax of the docker run command, refer to Docker Run.

The docker run command attempts to locate the latest version of the container locally on the Docker host. By default, it checks the version against the Docker Hub. This is a central, Docker-managed repository of Docker images available publicly via the Internet. If there is no locally cached container image or its version is out-of-date, the Docker daemon automatically downloads the latest version from the Docker Hub.
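If you prefer to download an image ahead of time instead of letting docker run fetch it on first use, you can pull it explicitly. A minimal example using the public nginx image from Docker Hub (any image name works the same way):

# Pull the latest version of an image from Docker Hub without running it
docker pull nginx:latest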

You can set up a private Docker Registry to maintain your own collection of container images. A private Docker Registry runs as a container based on the registry image available from the Docker Hub. You can store your private images in an Azure storage account.

To set up a private Docker Registry, follow this procedure:

  1. Create an Azure storage account.
  2. Start a registry container on a Docker host by running the following command.
    docker run -d -p 5000:5000 -e REGISTRY_STORAGE=azure -e REGISTRY_STORAGE_AZURE_ACCOUNTNAME="storage_account_name" -e REGISTRY_STORAGE_AZURE_ACCOUNTKEY="storage_account_key" -e REGISTRY_STORAGE_AZURE_CONTAINER="registry" --name=registry registry:2

    In the preceding command, storage_account_name and storage_account_key represent the name and one of the two keys of the Azure storage account you created in the previous step. This provisions a new registry container and makes it accessible via TCP port 5000.
    Note: To allow inbound connections on the port that you specified when executing the docker run command, be sure to update the network security group associated with the Docker Azure VM network interface where the registry container is running.

  3. Build a new Docker image by running the docker build command, or pull an existing image from Docker Hub by running the docker pull command.
  4. Use the following docker tag command to associate the image you created or downloaded in the previous step with the private registry.
    docker tag hello-world localhost:5000/image_name

    In the preceding command, image_name represents the name of the image. This tags the image and designates a new repository in your private registry.
  5. To upload the newly tagged image to the private Docker Registry, run the following command.
    docker push localhost:5000/image_name

    This pushes the image into the private registry.

  6. To download the image from the private registry, run the following command.
    docker pull localhost:5000/image_name

    If you run the docker run command, the Docker daemon uses the newly downloaded image from your private registry.
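To verify the round trip, you can run a container directly from the image you just pulled from your private registry. A short sketch (the port mapping is an arbitrary example and depends on what the containerized app listens on):

# Run the privately hosted image, publishing container port 80 on host port 8080
docker run -d -p 8080:80 localhost:5000/image_name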

 

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

 

Containers on Azure – Part 1

In the last decade, hardware virtualization has drastically changed the IT landscape. One of many consequences of this trend is the emergence of cloud computing. However, a more recent virtualization approach promises to bring even more significant changes to the way you develop, deploy, and manage compute workloads. This approach is based on the concept of containers.

This series of posts explains containers and the ways you can implement them in Azure and in your on-premises datacenter.

When this concept was introduced in the Microsoft world, it was somewhat difficult for me to understand the whole concept and how I would use it. So, my purpose here is to ease the path and explain how containers work and the ways you can implement them in Azure and in your on-premises datacenter. The goal is to make it easier to deploy clusters of containerized workloads by using Azure Container Service (ACS).

Azure Service Fabric offers an innovative way to design and provision applications by dividing them into small, independently operating components called microservices. Containers and microservices are complementary technologies. By combining them, you can further increase the density of your workloads, optimize resource usage, and minimize cost.

What are Containers?

In a very simplistic way, containers are the next stage in virtualizing computing resources. Hardware virtualization freed people to a large extent from the constraints imposed by physical hardware. It enabled running multiple isolated instances of operating systems concurrently on the same physical hardware. Container-based virtualization virtualizes the operating system, allowing you to run multiple applications within the same operating system instance while maintaining isolation. Containers within a virtual machine provide functionality similar to that of virtual machines on a physical server. To better understand this analogy, this topic compares virtual machines with containers.

The following table lists the high-level differences between virtual machines and containers.

Feature | Virtual machines | Containers
Isolation mechanism | Built in to the hypervisor | Relies on operating system support
Required amount of memory | Includes operating system and app requirements | Includes containerized app requirements only
Startup time | Includes operating system boot, start of services, apps, and app dependencies | Includes only the start of apps and app dependencies; the operating system is already running
Portability | Portable, but the image is larger because it includes the operating system | More portable, because the image includes only apps and their dependencies
Image automation | Depends on the operating system and apps | Based on the Docker registry (for Docker images)

To better understand the difference between virtual machines and containers, I highly suggest reading the article Virtual machines and containers in Azure.

Compared with virtual machines, containers offer several benefits, including:

  • Increased speed with which you can develop and share application code.
  • An improved testing lifecycle for applications.
  • An improved deployment process for applications.
  • The increased density of your workloads, resulting in improved resource utilization.

The most popular containerization technology is available from Docker. Docker uses the Linux kernel's built-in support for containers. Windows Server 2016 includes a Containers feature that delivers equivalent functionality in the Windows Server operating system.
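As an illustration, enabling that feature on Windows Server 2016 is a one-line PowerShell step (a sketch of the feature installation only; installing the Docker engine itself is a separate step):

# Install the Containers feature on Windows Server 2016 (requires a restart)
Install-WindowsFeature -Name Containers
Restart-Computer -Force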

Azure Container Service

ACS allows you to administer clusters of multiple Docker hosts running containerized apps. ACS manages the provisioning of cloud infrastructure components, including Azure virtual machines and virtual machine scale sets, Azure storage, virtual networks, and load balancers. Additionally, it provides the management and scaling of containerized apps to tens of thousands of containers via integration with the following two orchestration engines:

  • The Mesosphere Datacenter Operating System (DC/OS). A distributed operating system based on the open-source Apache Mesos distributed systems kernel.
  • Docker Swarm. Clustering software provided by Docker.

Based on this integration, you can manage ACS clusters on the DC/OS or the Docker Swarm platform by relying on the same tools you use to manage your existing containerized workflows.

You can provision an ACS cluster directly from the Azure portal. Alternatively, you can use an Azure Resource Manager template or the Azure command-line interface (see the provisioning sketch after the list below). During provisioning, you choose either DC/OS or Docker Swarm as the framework configuration. Subsequent configuration and management specifics depend mainly on this choice. Although both orchestration engines fully support Docker-formatted containers and Linux-based container isolation, they have architectural and functional differences, including:

  • DC/OS contains a Master availability set, public agent virtual machine scale set, and private agent virtual machine scale set, with fault-tolerant master/subordinate instances replicated by using Apache ZooKeeper. Docker Swarm contains a Master availability set and the agent virtual machine scale set.
  • DC/OS includes by default the Marathon orchestration platform, which manages the cluster-wide scheduling of containerized workloads. It supports multiple-resource scheduling that takes memory, CPU, disks, and ports into consideration.
  • With Docker Swarm, you can use the Docker command-line interface or the standard Docker application programming interface (API). DC/OS offers the REST API for interacting with its orchestration platform.
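As referenced above, here is a minimal provisioning sketch using the Azure CLI. The resource group and cluster names are hypothetical, and the command reflects the az acs syntax available at the time of writing:

# Create a resource group for the cluster (hypothetical names and region)
az group create --name myACSGroup --location westus

# Provision an ACS cluster with the DC/OS orchestrator; use --orchestrator-type swarm for Docker Swarm
az acs create --resource-group myACSGroup --name myACSCluster --orchestrator-type dcos --generate-ssh-keys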

Azure Service Fabric

Azure Service Fabric is a cloud-based platform for developing, provisioning, and managing distributed, highly scalable, and highly available services and applications. Its capabilities result from dividing the functionality provided by these services and applications into individual components called microservices. Common examples of such microservices include the shopping carts or user profiles of commercial websites and the queues, gateways, and caches that provide infrastructure services. Multiple instances of these microservices run concurrently on a cluster of Azure virtual machines.

This approach might sound similar to building multitier applications by using Azure Cloud Services, which allows you to independently scale web and worker tiers. However, Azure Service Fabric operates on a much more granular level, as the term microservices suggests. This allows for much more efficient resource utilization while scaling to potentially thousands of virtual machines. Additionally, it allows developers to introduce gradual changes in the code of individual application components without having to upgrade the entire application.

Another feature that distinguishes Azure Service Fabric from traditional Platform as a Service (PaaS) services is support for both stateless and stateful components. Azure Cloud Services are stateless by design. To save state information, they have to rely on other Azure services, such as Azure Storage or Azure SQL Database. Azure Service Fabric, on the other hand, offers built-in support for maintaining state information. This minimizes or even eliminates the need for a back-end storage tier. It also decreases the latency when accessing application data.

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga