Transform the Datacenter – Part 4 – Extend to the cloud

This post is a continuation of a series of posts about transforming your datacenter. You can see the previous posts below.

Extending into the cloud to scale on demand, while keeping costs low and the solution simple, is the major challenge when you think about extending your datacenter to the cloud. Done well, it lets you meet unexpected needs or plan ahead for times when your business needs to run at peak demand.

One of the first things that comes to mind when I talk about extending the datacenter to the cloud is security, and whether I can trust the cloud provider.

Microsoft Azure security and trust

Of all the public cloud providers, to my knowledge, Microsoft Azure is the one with the most security and compliance certifications. With the new Cybercrime division that Microsoft recently created, security is one of its top priorities; just look at how much it invests in security. For me, that boosts my confidence in my infrastructure.

These are the topics that concern me the most, and that I always look for in a public cloud provider:

Security

  • Secure development, operations, and threat mitigation practices provide a trusted foundation
  • Decades of experience building enterprise software & operating online services around the globe
  • Physical and platform security measures including access control, encryption, and network safeguards
  • Defense-in-depth and penetration testing help protect against cyber threats

Privacy

  • Unmatched legal commitments govern data privacy, access and use
  • First to offer privacy protections via Data Processing Agreements, EU Model Clauses, and HIPAA BAA
  • No mining of customer data for advertising or other purposes
  • Customers control where their data resides and who has access to it

Compliance

  • Independent audits demonstrate compliance with regulatory standards
  • Certified for ISO, SSAE 16/SOC 1, and SOC 2 compliance, plus a range of industry- and country-specific security standards
  • Shares audit report findings and compliance packages with customers


Make hybrid capabilities part of your infrastructure

When we talk about the concrete benefits of hybrid, there are a number of areas to consider. IT professionals often ask, “Where should I start?” so I’ve picked some of the places where you can most easily take advantage of cloud resources as an extension of your existing datacenter.

Microsoft Azure Infrastructure as a Service

This is one of the truest reasons why Microsoft Azure is your hybrid cloud solution compared to other public cloud providers: the consistent VM format between Hyper-V and Azure IaaS, for example, makes it easy to move existing applications to the cloud.

On-demand scalability and enterprise readiness are other points to take into consideration, although if you compare with the other public cloud providers, the difference there is almost nothing.

Service provider cloud options

The Cloud OS Network is a worldwide group of select Service Providers that partner closely with Microsoft to offer organizations cloud solutions on the Microsoft Cloud Platform (Hyper-V, System Center, Windows Azure Pack) and Azure enabled solutions.

Cloud OS Network members uniquely combine geographic affinity (data sovereignty, local datacenters), value-added services (customer-centric solutions, Azure-enabled scenarios), and customer reach and relationships. Microsoft now has 100+ partners in the network to serve your specific needs, covering over 600 local datacenters and serving over 3.7 million customers.
Find a COSN partner in your region or join the conversation on Twitter @CloudOSNetwork.

Hybrid Cloud storage

Being able to reduce storage costs and manage data growth, improve your data protection and recovery, and even increase agility and shift resources to business drivers are some of the benefits of the Microsoft StorSimple solution.

Business continuity

Microsoft’s goals when building a cloud-based disaster recovery solution were to make disaster recovery available to everyone, available everywhere, and easy to use. That is where Azure Site Recovery comes into play.

For more information about Azure Site Recovery, see this post.

Hybrid networking

Azure ExpressRoute enables you to create private connections between Azure datacenters and infrastructure that’s on your premises or in a colocation environment. ExpressRoute connections do not go over the public Internet, and offer more reliability, faster speeds, lower latencies and higher security than typical connections over the internet. In some cases, using ExpressRoute connections to transfer data between on-premises and Azure can also yield significant cost benefits.

ExpressRoute: connect directly to Azure from your datacenter, without going through the public internet.

With ExpressRoute, you can establish connections to Azure at an ExpressRoute location (an Exchange Provider facility) or connect directly to Azure from your existing WAN (such as an MPLS VPN) provided by a network service provider.
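As a rough illustration, here is how requesting a circuit might look with the current Azure SDK for Python (azure-identity and azure-mgmt-network), which postdates this post. The subscription, resource group, circuit name, provider, and peering location are all placeholders, and the model names are assumptions based on today’s SDK rather than anything shown in the original article:

```python
# Illustrative sketch only: requests an ExpressRoute circuit through a
# connectivity provider. All names and values are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    ExpressRouteCircuit,
    ExpressRouteCircuitServiceProviderProperties,
    ExpressRouteCircuitSku,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Ask for a 200 Mbps circuit; the provider then provisions its side of the link.
poller = client.express_route_circuits.begin_create_or_update(
    resource_group_name="rg-hybrid",            # placeholder resource group
    circuit_name="er-onprem-to-azure",          # placeholder circuit name
    parameters=ExpressRouteCircuit(
        location="westeurope",
        sku=ExpressRouteCircuitSku(
            name="Standard_MeteredData", tier="Standard", family="MeteredData"
        ),
        service_provider_properties=ExpressRouteCircuitServiceProviderProperties(
            service_provider_name="Example Provider",  # placeholder provider
            peering_location="Amsterdam",
            bandwidth_in_mbps=200,
        ),
    ),
)
circuit = poller.result()
print(circuit.service_provider_provisioning_state)  # "NotProvisioned" until the provider acts
```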

Hybrid identity

For identity and access, the breakthrough is an increased ability to maintain a single identity across multiple clouds. Continuous services and connected devices present a real challenge, with users expecting more and more from IT in terms of simple and fast access to resources and data. Microsoft offers multiple options in this area, including the advances in identity management in both Windows Server 2012 Active Directory and Microsoft Azure Active Directory. Cloud-based identity that integrates with your existing Active Directory solution allows tremendous flexibility in building single sign-on capabilities across your cloud deployments. This is the identity platform you know, reinvented for the cloud.

Microsoft is differentiated in this area by its ability to bridge from the on-premises datacenter to the cloud. Microsoft understands that you need to balance security and compliance against ease of access for end users, and it continues to innovate to make things easier. For example, the most recent updates to Azure Active Directory make it possible to federate identity across SaaS applications, such as Salesforce.com.
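To make the single-identity idea concrete, here is a minimal sketch of a daemon application authenticating against an Azure AD tenant with the MSAL library for Python (a modern successor to the identity libraries of this era). The tenant ID, application ID, and secret are placeholders, and the tenant can be the same one you synchronize from on-premises Active Directory:

```python
# Minimal sketch: an app obtains a token from Azure AD using the client
# credentials flow. Placeholders throughout; MSAL postdates this post.
import msal

app = msal.ConfidentialClientApplication(
    client_id="<application-id>",                               # placeholder
    authority="https://login.microsoftonline.com/<tenant-id>",  # placeholder tenant
    client_credential="<client-secret>",                        # placeholder
)

# One identity platform: the same tenant can back Azure, Office 365,
# and federated SaaS apps such as Salesforce.com.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    print("token acquired; send it as a Bearer header to the target API")
else:
    print(result.get("error_description"))
```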


Transform the Datacenter – Part 3 – Automate and Secure

I’ve talked about the infrastructure fabric, and the enhancements available in Windows Server 2012 R2, in the previous blog post (see here). But how do you think about the services that run on top of that infrastructure? How do you ensure that you’re managing effectively and building security into your processes?

Build and operate

Building the right platform is only half of what you need to do to support the changing needs of your business. The next big piece is operations (operations are fundamental to the software-defined datacenter (SDDC), not an add-on layer on top): the way that you bring new resources in, deliver applications and services to the business, meet demanding SLAs, and ensure that you’re meeting requirements for security and compliance.

Going back to the learnings from cloud (see here), you need to bring standardization and automation to core processes. But you also need to rethink security, because the pooling of resources in a private cloud model creates issues of access control. For on-demand self-service, you must think about who has the ability to demand, provision, use, or request services. In this post, we’ll talk more about identity as part of your security strategy. The critical thing to note here is that as you transform the datacenter to take advantage of innovation, security should be part of the picture. The security features in Windows Server 2012 R2 are second to none.

Beyond security, how can you be sure that you are operating your infrastructure efficiently, without wasting time or resources? The solution is a unified approach to management.

Unified management

Unified management means a single approach and a single console that lets you provision, deploy, monitor, and manage. It’s a key part of a “cloud operations” or “software-defined datacenter” approach: management is the intelligence within the system.

For provisioning, you’re looking for an approach that lets you avoid repetitive processes—so that you can deploy servers and applications rapidly and without errors.

You also want a robust set of tools for monitoring and management. Modern applications are often highly distributed, and managing them means tools that can take the whole stack into account, from the metal up.

And you want the right tools for service delivery, with a consistent experience across clouds.

Provisioning

Infrastructure

With the right processes and technology in place, you can take a new server, rack it, attach a network cable, and within 15 minutes have it become part of the running infrastructure as the system (see the sketch after this list):

  • Discovers
  • Interrogates
  • Deploys the OS
  • Configures the server as part of its workload role
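As a conceptual sketch of how those four steps chain together (the helper functions below are invented stand-ins, not a real System Center or Virtual Machine Manager API):

```python
# Hypothetical sketch of the four-step bare-metal flow above. The
# helpers are stand-ins for illustration, not a real VMM/System Center API.

def discover(mac: str) -> dict:
    """Stand-in for PXE/BMC discovery of a newly racked server."""
    return {"mac": mac}

def interrogate(node: dict) -> dict:
    """Stand-in for hardware inventory (CPUs, memory, NICs, disks)."""
    node["inventory"] = {"cpus": 32, "memory_gb": 256}  # made-up values
    return node

def deploy_os(node: dict, image: str) -> None:
    """Stand-in for imaging the node from a prepared OS image."""
    print(f"deploying {image} to {node['mac']}")

def configure_role(node: dict, role: str) -> None:
    """Stand-in for joining the node to its workload role."""
    print(f"configuring {node['mac']} as {role}")

def provision(mac: str, role: str = "hyperv-host") -> None:
    node = interrogate(discover(mac))
    deploy_os(node, image="ws2012r2")
    configure_role(node, role)

provision("00:15:5d:01:02:03")
```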

Applications

For stable, resilient, reliable workloads, you need to focus on creating templates for application provisioning. You can then automate triggers for scaling up or scaling down.

By using templates and automating repetitive processes, you can increase speed without introducing risk into the system.
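A scale trigger is conceptually just a rule evaluated against telemetry. Here is a toy sketch; the thresholds and instance limits are arbitrary values, not taken from any Microsoft product:

```python
# Toy sketch of an autoscale trigger: compare a metric against
# thresholds and return the new instance count. Values are arbitrary.
def desired_instances(cpu_percent: float, current: int,
                      minimum: int = 2, maximum: int = 10) -> int:
    if cpu_percent > 75.0:               # sustained load: scale up by one
        return min(current + 1, maximum)
    if cpu_percent < 25.0:               # idle capacity: scale down by one
        return max(current - 1, minimum)
    return current                       # within band: no change

print(desired_instances(cpu_percent=82.0, current=4))  # -> 5
```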

Management and monitoring

You need to monitor from the hardware up to the application, so you can determine where a problem really exists. You should know whether it’s an issue with the SAN, the server, or the database.

System Center 2012 R2 lets you monitor your on-premises datacenter and also monitor the health of your subscription services in Azure.

Microsoft offers distributed application performance monitoring, so you can verify the health, performance, and availability of applications in a hybrid environment.

Intelligent monitoring means that in a private datacenter you monitor all the way down the stack, while in Azure you only monitor the service.
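A minimal sketch of that bottom-up fault-isolation idea (the layers and the simulated fault below are invented for illustration):

```python
# Toy sketch: walk the stack from the hardware up and report the first
# unhealthy layer, answering "where does the problem really exist?"
from typing import Callable

def isolate_fault(checks: list[tuple[str, Callable[[], bool]]]) -> str:
    for layer, is_healthy in checks:
        if not is_healthy():
            return f"fault isolated to: {layer}"
    return "all layers healthy"

print(isolate_fault([
    ("SAN",      lambda: True),
    ("server",   lambda: True),
    ("database", lambda: False),   # simulated database fault
]))  # -> fault isolated to: database
```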

Service delivery

The final key element when thinking about the software-defined datacenter is service delivery. How do you get your users the resources they need, whether they are developers or LOB application owners? A great approach to service delivery should allow you to add cloud resources to your infrastructure in a hybrid model, so that you’re automatically drawing on extra capacity. A strong service delivery model also allows you to pre-approve compute, storage, and network resources for designated users, so that authorization isn’t a roadblock.

Microsoft Azure Pack provides a multi-tenant, self-service cloud that works on top of your existing software and hardware investments. Building on the familiar foundation of Windows Server and System Center, Microsoft Azure Pack offers a flexible and familiar solution that your business can take advantage of to deliver self-service provisioning and management of infrastructure (Infrastructure as a Service, IaaS) and application services (Platform as a Service, PaaS), such as Web Sites and Virtual Machines.

With the Azure Pack, you can standardize IT service offerings, empowering users to directly identify, access, and request applications and services published through a centralized configuration management database. It provides a self-service portal with a provisioning and delegation framework, along with chargeback, compliance management, and reporting capabilities.

Microsoft Cloud Platform System

The Cloud Platform System is a revolutionary new product designed specifically to reduce the complexity and risk of implementing a hybrid cloud—and to get you up and running fast. This appliance includes both the hardware and the software you need to create the agile datacenter of the future—specifically, Windows Server 2012 R2, Microsoft System Center 2012 R2, and Microsoft Azure technologies. Preconfigured hardware and software working together speeds your ability to offer customers the infrastructure as a service (IaaS) and platform as a service (PaaS) resources they want, whether that means self-provisioned virtual machines, web applications, or other resources. You simply choose the configuration you want.

Transform the Datacenter – Part 2 – Software-defined Datacenter

This post is a continuation of a series of posts about transforming your datacenter. You can see the previous post here.

So, the first pillar of transforming your datacenter is the Software-defined Datacenter.

What is Software-defined Datacenter?

“Software-defined” has become an industry term—but what does it really mean for us? With a software-defined datacenter, you gain the ability to manage diverse hardware as a unified resource pool. You get greater flexibility and more resilience. That’s the big thing we’re learning from cloud—that to respond rapidly to the demands of the business, you must move away from a highly-customized infrastructure to a standardized, automated infrastructure.

To achieve that, you need to rethink the datacenter. From a cloud perspective, the whole abstraction layer comes down to three resources that are important to manage, and these are what you use to build the resource pools that you will allocate to each cloud (see the sketch after this list):

  • Compute
  • Networking
  • Storage
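To make the pooling idea concrete, here is a toy sketch of how diverse hosts aggregate their capacity into one pool from which clouds are then carved. The class and all the figures are invented for illustration:

```python
# Toy sketch: hosts contribute compute, network, and storage capacity
# to a single pool; clouds are allocated from the pool, not from
# specific hardware. All figures are made up.
from dataclasses import dataclass

@dataclass
class Capacity:
    cores: int
    network_gbps: int
    storage_tb: int

    def __add__(self, other: "Capacity") -> "Capacity":
        return Capacity(self.cores + other.cores,
                        self.network_gbps + other.network_gbps,
                        self.storage_tb + other.storage_tb)

hosts = [Capacity(32, 10, 20), Capacity(64, 40, 50), Capacity(32, 10, 20)]
pool = sum(hosts[1:], hosts[0])   # one unified resource pool
print(pool)                       # Capacity(cores=128, network_gbps=60, storage_tb=90)
```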

Reimagine compute

Microsoft is once again a leader in the Gartner x86 Virtualization Magic Quadrant, and it is driving innovation in compute with industry-leading scale and performance:

  • Scale to your largest workloads with 64 virtual processors per VM and 1 TB of memory per VM
  • Drive up your consolidation ratio with 320 logical processors per host, 4 TB of physical memory per host, and 1,024 VMs per host
  • Increase scale per cluster with 64 physical nodes per cluster and 8,000 VMs per cluster

Zero-downtime migrations: Since Windows Server 2012 R2, live migration just keeps getting better. Live migration is a critical aspect of the software-defined datacenter, because you need the flexibility of moving virtual machines between physical servers with zero downtime. In the latest release of Windows Server, Microsoft has made it easier to move large numbers of virtual machines (for dynamic load balancing, for example) with the same speed that you expect when moving a single virtual machine.

Open-source integration: Directly in Hyper-V, Microsoft has built features to enable live backups for Linux guests, and has exhaustively tested to ensure that Hyper-V features, like live migration, work for Linux guests just as they do for Windows guests. Since Windows Server 2012 R2, Microsoft engineering teams have worked across the board to ensure Linux is at its best on Hyper-V.

Infrastructure for hardware-based security: Windows Server also includes multiple features to make it easier to secure data and restrict access.

Virtualization is the foundation of the software-defined datacenter. Microsoft offers enterprise-grade features and ongoing innovation to allow you to create a flexible, resilient infrastructure fabric.

Reimagine networking

When we look at datacenter transformation, networking is an area with huge potential. Today’s networks can be rigid, meaning that they make it difficult to move workloads within the infrastructure, and network operations involve high levels of manual processes.

As a result, one of the biggest trends today is software-defined networking (SDN). What exactly does that mean?

A big part of SDN is network virtualization, a capability that Microsoft offers today in Windows Server 2012. Network virtualization does for the network what server virtualization did for compute. It allows you to use software to manage a diverse set of hardware as a single, elastic resource pool. If you then add in additional management capabilities through software, you get a very flexible approach.

And the benefits are very similar for networking. With compute capacity, we see with the private cloud model how virtualization gives you increased flexibility in moving workloads and allocating capacity. You get greater efficiency when you have this increased ability to balance the load across your existing resources.

With Windows Server 2012 R2 and System Center 2012 R2 Virtual Machine Manager, your network becomes a pooled resource that can be defined by software, managed centrally through automation, and extended beyond your datacenter.

Networking today is complicated because the underlying physical network hardware such as ports, switches, and routers tends to require manual configuration. Network operations are often complex since the management interfaces to configure and provision network devices tend to be proprietary; in many cases, network configuration needs to happen on a per-device basis, making it difficult to maintain an end-to-end operational view of your network.

With a virtualized network infrastructure, you can control the building of the network, configuration, and traffic-routing using software. You can manage your network infrastructure as a unified whole, and that allows you to do three very important things: you can isolate what you need to isolate, you can move what you need to move, and you can build connections between your datacenter and cloud resources.

Isolate:

So, let’s first talk about isolation. We’ve talked a lot about the importance of a unified resource pool, but there are many reasons why you might want to create divisions or partitions within that pool. For example, you might want to separate individual departments. As companies increasingly rely on central datacenters to support global operations, you might also want to separate geographical regions. Today, some companies create separate areas of physical servers within the datacenter, designated to particular geographies. But that isn’t a very efficient usage model, and it doesn’t give you many options if that set of servers experiences problems. With network virtualization, or software-defined networking, you can create boundaries within the datacenter to enable multi-tenancy and keep workloads isolated from each other without placing them in separate hardware pools.
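Conceptually, Hyper-V Network Virtualization achieves this by keying every address lookup on a virtual subnet ID as well as the tenant’s own IP address. A toy sketch of the idea follows; the IDs and addresses are invented:

```python
# Toy sketch of network-virtualization isolation: two tenants reuse the
# same customer address (CA), and the (virtual subnet ID, CA) pair maps
# to the provider address (PA) of the physical host. Values invented.
lookup = {
    (5001, "10.0.0.4"): "192.168.1.10",   # tenant A's VM
    (6001, "10.0.0.4"): "192.168.1.11",   # tenant B's VM: same CA, isolated by VSID
}

def provider_address(vsid: int, customer_ip: str) -> str:
    """Resolve a tenant packet to the physical host that owns the VM."""
    return lookup[(vsid, customer_ip)]

print(provider_address(5001, "10.0.0.4"))  # -> 192.168.1.10
print(provider_address(6001, "10.0.0.4"))  # -> 192.168.1.11
```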

What else can you do with a virtualized network infrastructure?

Move:

In the past, individual workloads were pretty tightly coupled to the underlying physical network infrastructure. That meant that moving workloads within the datacenter required extensive manual reconfiguration. Network virtualization lets you move workloads even from one datacenter to another, because the control plane for the network is all handled through software. Microsoft has several features in Windows Server 2012 and Windows Server 2012 R2 that combine to make that process even easier.

Connect to clouds:

And finally, software-defined networking lets you connect easily to clouds outside your datacenter. It allows you to treat cloud resources as an extension of your own infrastructure, so in a way, you could say that SDN and network virtualization are the keys to hybrid cloud. That’s why Microsoft continues to invest so heavily in this area.

Reimagine storage

Organizations continue to face storage pain. Although storage cost per TB continues to fall, the demand for storage is growing much faster: 35 to 50+ percent annually. This only increases the pressure on the business as storage spend outpaces server spend (or even IT budgets) and introduces a chain of costs, and more pain.

Microsoft wants to provide customers with ways to improve their storage infrastructure and how they leverage it, regardless of their current storage environment. The goal is to create strategic options using cloud technologies that lead to lasting solutions.

Those include continuing with traditional storage investments and optimizing them, or integrating new options for next-generation on-premises (private cloud) or hybrid cloud storage.

Many organizations have existing storage and technology investments that they wish to maintain, for example direct-attached storage, storage area networks (SAN), network-attached storage (NAS), and data protection infrastructure.

But you don’t need a SAN for every purpose.

Cost-effective storage for private clouds

Today most organizations have virtualized compute. As we discussed earlier, whatever your hypervisor, live migration is a key capability, and historically organizations have used a SAN to support it. But Microsoft has introduced new options for customers’ primary storage, providing the performance and availability required to use file storage as a back end for virtualization workloads.

This is possible through a set of technologies (one of which is sketched after this list), including:

  • SMB 3 protocol updates that improve network file share performance
  • New load-balanced active-active file server clusters (Scale-out File Server (SoFS))
  • SMB Transparent Failover for these clusters, so that the servers relying on them can run uninterrupted, even if a node fails
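To illustrate what Transparent Failover means from the workload’s point of view, here is a toy model (this is not the SMB 3 protocol, just the behavior it guarantees): the operation resumes against a surviving node instead of surfacing an error.

```python
# Toy sketch of the behavior SMB Transparent Failover provides: if the
# active file-server node dies mid-operation, the client resumes the
# same operation against a surviving node; the workload sees a brief
# pause, never an error. Node names are invented.
class FailoverShare:
    def __init__(self, nodes: list[str]):
        self.nodes = nodes
        self.active = 0
        self.dead: set[str] = set()

    def write(self, data: bytes) -> None:
        try:
            self._send(self.nodes[self.active], data)
        except ConnectionError:
            self.active = (self.active + 1) % len(self.nodes)  # fail over
            self._send(self.nodes[self.active], data)          # resume, don't error

    def _send(self, node: str, data: bytes) -> None:
        if node in self.dead:
            raise ConnectionError(node)
        print(f"wrote {len(data)} bytes via {node}")

share = FailoverShare(["sofs-node1", "sofs-node2"])
share.write(b"block-1")        # -> wrote 7 bytes via sofs-node1
share.dead.add("sofs-node1")   # simulate the active node failing
share.write(b"block-2")        # -> wrote 7 bytes via sofs-node2, no error surfaced
```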

Employing and managing file-based storage will be very cost-effective for many private cloud deployments. The data protection management solution supports shared-nothing live migration in these environments: VMs can move freely, remaining protected while using resources efficiently.