Channel: Windows Virtualization Team Blog

Hyper-V Replica - Name Resolution of Internationalized Server/Domain names


In a mixed-language environment where the server name or domain name contains international characters, you might encounter an error when enabling replication. The event viewer messages will tell you that “Hyper-V failed to enable replication for virtual machine” and “The server name or address could not be resolved (0x00002EE7)”. The problem can seem a little perplexing because pinging the same FQDN might work just fine. It occurs because of Hyper-V Replica’s dependency on HTTP.

To work around the issue, an exception rule needs to be added to the primary server’s name resolution policies. Follow these steps to create the rule:

  1. Open the Local Group Policy Editor (Gpedit.msc).
  2. Under Local Computer Policy, expand Computer Configuration, Windows Settings, and then click Name Resolution Policy.
  3. In the Create Rules area, click FQDN, and then enter the Replica server FQDN that was failing.
  4. On the Encoding tab, select the Enable Encoding check box, and make sure that UTF-8 with Mapping is selected.
  5. Click Create.

    The rule appears in the Name Resolution Policy Table.

  6. Click Apply, and then close the Local Group Policy Editor.
  7. From an elevated command prompt, run the command gpupdate to update the policy.
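On Windows Server 2012, the same exception rule can also be sketched in PowerShell using the DnsClient module's Add-DnsClientNrptRule cmdlet. The namespace below is a placeholder; substitute the Replica server FQDN that failed to resolve:

```powershell
# Sketch, assuming the DnsClient module (Windows Server 2012 and later).
# Replace the namespace with the failing Replica server FQDN.
Add-DnsClientNrptRule -Namespace "replicaserver.example.com" `
                      -NameEncoding Utf8WithMapping `
                      -Comment "Hyper-V Replica name resolution workaround"

# Verify the rule, then refresh policy.
Get-DnsClientNrptRule | Format-List Namespace, NameEncoding
gpupdate
```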

[Screenshot: Local Group Policy Editor]


Hyper-V Replica Certificate Based Authentication - makecert


We have had a number of queries on how to enable replication using certificates created with makecert. Though the Understanding and Troubleshooting guide for Hyper-V Replica discusses this aspect, I am posting a separate article on this. The steps below are applicable to a simple lab deployment consisting of two standalone servers – PrimaryServer.domain.com and ReplicaServer.domain.com. This can be easily extended to clustered deployments with the Hyper-V Replica Broker.

Makecert is a certificate creation tool which generates certificates for testing purposes. Information on makecert is available here - http://msdn.microsoft.com/en-us/library/bfsktky3.aspx.

1. Copy the makecert.exe tool to your primary server

2. Run the following command from an elevated command prompt on the primary server. This command creates a self-signed root authority certificate, installs it in the root store of the local machine, and saves it as a file locally

makecert -pe -n "CN=MyTestRootCA" -ss root -sr LocalMachine -sky signature -r "MyTestRootCA.cer"

3. Run the following command once per server, from an elevated command prompt, to create new certificates signed by the test root authority certificate

makecert -pe -n "CN=<FQDN>" -ss my -sr LocalMachine -sky exchange -eku 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 -in "MyTestRootCA" -is root -ir LocalMachine -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 <MachineName>.cer

Each time:

  • Replace <FQDN> with the FQDN of the primary or replica server
  • Replace <MachineName>.cer with any file name

The command installs a test certificate in the Personal store of the local machine and saves it as a file locally. The certificate can be used for both Client and Server authentication.
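Since the command has to be run once per server, a small loop is a convenient sketch. The FQDNs below match the lab deployment described above, and makecert.exe is assumed to be in the current directory:

```powershell
# Sketch: issue one certificate per server, signed by the test root CA from step 2.
$fqdns = "PrimaryServer.domain.com", "ReplicaServer.domain.com"
foreach ($fqdn in $fqdns) {
    .\makecert.exe -pe -n "CN=$fqdn" -ss my -sr LocalMachine -sky exchange `
        -eku 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 `
        -in "MyTestRootCA" -is root -ir LocalMachine `
        -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12 "$fqdn.cer"
}
```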

4. The certificates can be viewed by mmc->File->Add/Remove Snap-in…->Certificates->Add->”Computer Account”->Next->Finish->Ok

You will find the Personal certificates (with the machine names) in the Personal store and the root certificate (MyTestRootCA) in the Trusted Root Certification Authorities store.

5. Export the replica server certificate with the private key.


6. Copy MyTestRootCA.cer and the above exported certificate (RecoveryServer.pfx) to the Replica server.

7. Run the following command from an elevated prompt in ReplicaServer.domain.com

certutil -addstore -f Root "MyTestRootCA.cer"

8. Open the certificate mmc in ReplicaServer.domain.com and import the certificate (RecoveryServer.pfx) in the Personal store of the server. Provide the pfx file and password as input:


9. By default, a certificate revocation check is mandatory, and self-signed certificates don’t support revocation checks. To work around this, modify the following registry value on both the primary and Replica servers

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Replication" /v DisableCertRevocationCheck /d 1 /t REG_DWORD /f
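The same change can be sketched in PowerShell (run on both servers; the key path is the one from the reg command above):

```powershell
# Sketch: disable the certificate revocation check for Hyper-V Replica.
$key = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Replication"
New-ItemProperty -Path $key -Name DisableCertRevocationCheck `
                 -Value 1 -PropertyType DWord -Force
```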

Updates for Hyper-V


A new article has appeared on the TechNet Wiki that provides a complete list of updates for Hyper-V on Windows Server 2012:

http://social.technet.microsoft.com/wiki/contents/articles/15576.hyper-v-update-list-for-windows-server-2012.aspx

I am really happy to see this online, as we have received lots of positive feedback about how useful the existing documentation is for updates that are available for Hyper-V on Windows Server 2008 R2 and on Windows Server 2008.  Hopefully this will help everyone in planning and managing their Hyper-V deployments.

Cheers,
Ben

How to install integration services when the virtual machine is not running


We’ve been talking to a lot of people about deploying integration services (integration components) lately.  As it turns out, they’re pretty easy to patch offline with existing Hyper-V tools.

First, why would you update integration services on a not-running (offline) VM?

Offline VM servicing is valuable for VM templates and for environments that create new VMs frequently, since it allows you to keep VM templates up to date.  While this post targets exclusively integration service updates, the same approach applies to many other updates as well as any configuration specific to the environment.  Keeping VM images fully up to date and configured before they are deployed saves significant setup time and support every time a new VM is created.

Here is a detailed write-up about deploying and updating integration services on an offline VM – both VHD/VHDX – using out of box PowerShell tools and a cab (cabinet) file that comes bundled with Server 2008 or later Hyper-V hosts.

Before you start, open a PowerShell console as administrator.  Make sure Hyper-V is installed and you’re working from the management (host) OS. The management OS must be Windows Server 2008 R2 or newer and more recent than the VM OS you’re patching.  I tested this script with a Server 2012 host.

For default Hyper-V installs, the CAB containing the up-to-date integration components will be located in [HostDrive]:\windows\vmguest\support.  From there, choose your architecture; for my machine, I chose amd64.  x86 people, your files are there too.  This folder contains all of the components built into the VM Guest ISO.  To update integration components offline, we’re only interested in the two cab files listed below.

There will be two files:

  • Windows6.x-HyperVIntegrationServices-x64.cab corresponds with Windows 7 and earlier guests. Tested: Server 2008 R2, Windows 7 (Enterprise and Enterprise SP1).
  • Windows6.2-HyperVIntegrationServices-x64.cab corresponds with Windows 8 and Server 2012 guests. Tested: Server 2012, Windows 8.

Note: this process only works for Windows Server 2008 R2 / Windows 7 and later operating systems. It works with both VHD and VHDX files.

You will need the path to this file.  From here on out I’ll refer to it as $integrationServicesCabPath.  If you pick the wrong one it will fail a version check without harming the guest.

$integrationServicesCabPath="C:\Windows\vmguest\support\amd64\Windows6.2-HyperVIntegrationServices-x64.cab"


The next step is to apply the cab to the offline VM.

First, you’ll need the path to your VM image, I’m going to refer to this as $virtualHardDiskToUpdate.

$virtualHardDiskToUpdate="D:\client_professional_en-us_vl.vhd"

Next, mount the image as a pass-through disk (unprotected data and direct I/O) and keep track of the disk number returned.

$diskNo=(Mount-VHD -Path $virtualHardDiskToUpdate -Passthru).DiskNumber

Check to make sure the operational status is online and find the drive letter so we know which drive to patch.

(Get-Disk $diskNo).OperationalStatus

$driveLetter=(Get-Disk $diskNo | Get-Partition | Get-Volume).DriveLetter

In my case this returned “online” and “E” (stored in $driveLetter).  If you mounted a VM with Windows fully installed, the chances are good it’ll mount more than one drive depending on the VM’s particular setup.  If this is the case, find the drive with all of the Windows OS files and apply the integration service update to that one.  If the status is not Online, prepare the image by running:

Set-Disk $diskNo -IsOffline:$false -IsReadOnly:$false
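When the mounted VHD exposes more than one volume, picking the Windows volume can be sketched as below. The Test-Path check on a \Windows folder is an assumption about the guest layout:

```powershell
# Sketch: of all volumes on the mounted disk, keep the one containing \Windows.
$driveLetter = (Get-Disk $diskNo | Get-Partition | Get-Volume |
    Where-Object { $_.DriveLetter -and (Test-Path "$($_.DriveLetter):\Windows") }).DriveLetter
```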

Now the mounted VHD is ready to be patched.

Add-WindowsPackage -PackagePath $integrationServicesCabPath -Path ($driveLetter+":\")

You should see a blue progress bar at the top of PowerShell.  Enjoy watching the little yellow o’s fill your screen.  If the cab couldn’t apply, make sure you’re using the right version for your guest OS and the right drive.  If the VM is running, it will not patch.

Finally, dismount the VHD.  Notice you dismount using the image path and not the mounted drive letter.

Dismount-VHD -Path $virtualHardDiskToUpdate


Here’s a more reasonable, consolidated PowerShell script:

$virtualHardDiskToUpdate="D:\client_professional_en-us_vl.vhd"
$integrationServicesCabPath="C:\Windows\vmguest\support\amd64\Windows6.2-HyperVIntegrationServices-x64.cab"

#Mount the VHD
$diskNo=(Mount-VHD -Path $virtualHardDiskToUpdate -Passthru).DiskNumber

#Get the drive letter associated with the mounted VHD; note this assumes it only has one partition, if there are more use the one with the OS bits
$driveLetter=(Get-Disk $diskNo | Get-Partition | Get-Volume).DriveLetter

#Check to see if the disk is online, if it is not, online it
if ((Get-Disk $diskNo).OperationalStatus -ne 'Online')
{Set-Disk $diskNo -IsOffline:$false -IsReadOnly:$false}

#Install the patch
Add-WindowsPackage -PackagePath $integrationServicesCabPath -Path ($driveLetter+":\")

#Dismount the VHD
Dismount-VHD -Path $virtualHardDiskToUpdate

Thank you to Taylor Brown (http://blogs.msdn.com/b/taylorb), for the script!

Cheers,
Sarah Cooley

Resynchronization of virtual machines in Hyper-V Replica


What is resynchronization and why is it needed?

Hyper-V Replica provides protection to VMs by tracking and replicating changes to the virtual hard disks (VHDs) of the VM. Hyper-V Replica runs around the clock, every day of the year; for any VM that has been enabled for replication, it ensures that the data on the primary site and the Replica site are kept as closely in sync as the configured replication allows.

To begin with, Hyper-V Replica (HVR) requires that the data on the virtual hard disks (VHDs) of the primary and replica VMs be the same. This is achieved through the process of initial replication, and establishes a baseline on which replicated changes can be applied. However, due to factors beyond the control of the administrator – such as faulty hardware and OS bugchecks – it is possible that the primary and Replica VMs are not in sync.

Thus in a rainy-day scenario (details in the following section), when HVR determines that the replica VM can no longer be kept in sync with the primary by applying the replicated changes, resynchronization is required. Resynchronization (or Resync) is the process of re-establishing the baseline – by ensuring that the primary and replica VHDs have exactly the same data stored.

(NOTE: In this post we will use a VM named “RESYNC VM” in all examples and screenshots.)

 

 

When does resynchronization happen?

As the table below shows, Resync is not expected to occur regularly. In fact, in the normal course of replication this is quite a rare event. The VM enters the “Resynchronization Required” state when any one of the following conditions is encountered:

Site    | Condition                                                   | Scenario example
------- | ----------------------------------------------------------- | ----------------
Primary | VHD is modified while the VM is turned off                  | Mounting/modifying the VHD outside the VM, Edit Disk, offline patching
Primary | Size of tracking log files > 50% of total VHD size for a VM | Network outage causes logs to accumulate
Primary | Write failure to tracking log file                          | VHD and logs are on SMB and connectivity to the SMB storage is flaky
Primary | Tracking log file is not closed gracefully                  | Host crash with the primary VM running (also applicable to VMs in a cluster)
Primary | Volume is reverted to an older point in time                | Reverting the VM to an older snapshot; volume/snapshot backup and restore
Replica | Out-of-sequence or invalid log file is applied              | Restoring a backed-up copy of the Replica VM; importing an older VM copy when migrating by export-import; reverting the volume to an older point in time using volume backup and restore; reverting the VM to an older snapshot

When the VM enters the “Resynchronization Required” state, the replication health becomes “Critical” and the VM is scheduled for resynchronization. At the same time, HVR stops tracking the guest writes for the VM and nothing is replicated.

The Replication Health view also shows a message indicating that resynchronization is required.

Initiating and scheduling resynchronization

Depending on the VM’s resynchronization settings, the user might have to trigger the resynchronization operation explicitly. When that is required, follow the instructions given in the replication health screen:

  1. Right-click on the VM for the options
  2. Under Replication, select the Resume Replication option

You will be presented with a screen to schedule the resynchronization operation.

To start the resync operation from PowerShell, use the Resume-VMReplication cmdlet:

Resume-VMReplication -VMName "RESYNC VM" -Resynchronize -ResynchronizeStartTime "04/15/2013 12:00:00"

 

User-initiated resynchronization is also possible, but unless absolutely necessary it should be avoided. In order to explicitly force resynchronization on a VM that is not in the “Resynchronization Required” state, first suspend the replication and then initiate resync:

Suspend-VMReplication -VMName "RESYNC VM"
Resume-VMReplication -VMName "RESYNC VM" -Resynchronize
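Before forcing a resync, it can help to check the current replication state first. A sketch is below; the state name WaitingForStartResynchronize is the one Get-VMReplication reports when resync is pending, but treat it as illustrative for your Hyper-V version:

```powershell
# Sketch: only resume with -Resynchronize when the VM actually requires it.
$rep = Get-VMReplication -VMName "RESYNC VM"
if ($rep.State -eq "WaitingForStartResynchronize") {
    Resume-VMReplication -VMName "RESYNC VM" -Resynchronize
}
else {
    "VM is in state '$($rep.State)'; no resynchronization pending."
}
```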

 

The scheduling of the resynchronization operation can be configured for each VM:

  1. On the primary site, open the Hyper-V Manager
  2. Right-click on the desired VM, and select the Settings… option
  3. In the left hand pane under Replication, select the Resynchronization option


The default option is to schedule the resynchronization operation during off-peak hours. The resource intensive nature of the operation makes such scheduling useful, and aims to reduce the impact on running VMs.

The same can be configured in PowerShell using the Set-VMReplication cmdlet:

# Manual resync
Set-VMReplication -VMName "RESYNC VM" -AutoResynchronizeEnabled 0

# Automatic resync
Set-VMReplication -VMName "RESYNC VM" -AutoResynchronizeEnabled 1 -AutoResynchronizeIntervalStart 00:00:00 -AutoResynchronizeIntervalEnd 23:59:59

# Scheduled resync
Set-VMReplication -VMName "RESYNC VM" -AutoResynchronizeEnabled 1 -AutoResynchronizeIntervalStart 00:00:00 -AutoResynchronizeIntervalEnd 06:00:00

 

To see the resynchronization settings in PowerShell, use the Get-VMReplication cmdlet and look for the AutoResynchronizeEnabled, AutoResynchronizeIntervalStart, and AutoResynchronizeIntervalEnd fields:

Get-VMReplication -VMname "RESYNC VM" | fl *

 

 

 

The process of resynchronization

When the resync operation is triggered – either automatically or by the user – the following high-level sub-operations are executed in sequence:

  1. Check the VHD characteristics of primary and replica VMs:   before resync can be done, these have to match. Hyper-V Replica checks the geometry and size of the disk before starting resync. Top of the list of exceptions to watch out for are size mismatches – caused by resizing either a primary or replica VHD without appropriately resizing the other one.
  2. Start tracking the VHDs:   
    1. The guest writes are tracked into the log file, but these changes are not replicated until resync is completed.
    2. It is important to note that if resync takes too long then you might hit the “50% of total VHD size for a VM” condition and end up sending the VM into the “Resynchronization Required” state again.
    3. Event number 29242 is logged that specifies the VM, VHDs, start block, and end block.
  3. Create a diff disk for each replica VHD:   this allows the resync operation to be cancelled without leaving the underlying VHD in an inconsistent state. The diff disk with all the resync-ed changes is then merged back into the VHD at the end of the resync operation.
  4. Compare and sync the VHDs:    the comparison of the VHDs is done block-by-block and only the blocks that differ are sent across the network. This can reduce the data sent over the network, depending on how different the two VHDs are. While this operation is going on:
    1. Pause Replication will stop the current resync operation. Doing Resume Replication later will continue the resync comparisons from where it left off.
    2. Planned failover or Test failover will not be possible.
    3. At any point the user can always do Unplanned Failover, but this will cancel the resync operation.
    4. Resync can be cancelled at any point. This will keep the VM in the “Resynchronization Required” state, and the next time replication is resumed, it will start from the beginning.
  5. Completion of compare and sync:     HVR logs event number 29244 once the compare and sync operation is done, and it specifies the VHD, VM, blocks sent, time taken, and result of the operation.
  6. Merge the resync changes to the VHD:     after this operation completes, the resync operation cannot be cancelled or undone.
  7. Delete the recovery points:   this is a significant side-effect of resync. The recovery points are built upon the VHD as a baseline. However, resync effectively changes that baseline and makes the data stored in those recovery points invalid. After resync completes, the recovery points are built again over a period of time.

 

 

Resynchronization performance

Resynchronization performance was tested and compared against the performance of Online Initial Replication (IR). The setup consisted of a standalone server with 4 running VMs – 2 File Servers and 2 SQL servers running typical workloads. Two VMs were replicated to a standalone Replica server. The network bandwidth was varied to see the impact. Data size that was replicated during Online IR was approximately 80GB.

Scenario                    | Network speed | Online IR size | Online IR time | Resync size | Resync time
--------------------------- | ------------- | -------------- | -------------- | ----------- | -----------
Resync – offline scheduling | 1 Gbps        | ~80 GB         | ~1.5 hrs       | ~5.5 GB     | ~2 hrs
Resync – immediate          | 1 Gbps        | ~80 GB         | ~1 hr          | ~100 MB     | ~1 hr
Resync – offline scheduling | 1.5 Mbps      | ~80 GB         | 4 days         | ~10 GB      | ~1 day
Resync – immediate          | 1.5 Mbps      | ~80 GB         | 4 days         | ~78 MB      | ~1 hour

The tests indicate that resync is preferable to Online IR in low speed networks. When the two sites are connected by a high speed network, resync works well for low churn workloads.

There is also a perfmon counter for measuring the resynchronized bytes:  \Hyper-V Replica VM\Resynchronized Bytes.
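The counter can also be sampled from PowerShell with Get-Counter. The instance name in parentheses below is illustrative; use the VM's name as it appears in perfmon:

```powershell
# Sketch: sample the resynchronized-bytes counter for one VM.
Get-Counter -Counter "\Hyper-V Replica VM(RESYNC VM)\Resynchronized Bytes" |
    Select-Object -ExpandProperty CounterSamples |
    Format-Table InstanceName, CookedValue -AutoSize
```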

 

Conclusion

The disks going out of sync is a rainy-day event in Hyper-V Replica. However, with the resynchronization operation, this is handled gracefully within the product, minimizing the administrative overhead and the resources used in bringing the disks back into sync.

Hyper-V Replica Capacity Planner


Customers have frequently asked us for capacity planning guidance before deploying Hyper-V Replica – e.g.: “How much network bandwidth is required between the primary and replica site”, “How much storage is required on the primary and replica site”, “What is the storage impact by enabling multiple recovery points” etc.

The answer to the above and many other capacity planning questions is “It depends” – it depends on the workload, it depends on the IOPS headroom, it depends on the available storage etc. While one can monitor every single perfmon counter to make an informed decision, it is sometimes easier to have a readymade tool.

The Capacity Planner for Hyper-V Replica which was released on 5/22, allows you to plan your Hyper-V Replica deployment based on the workload, storage, network and server characteristics. The guidance is based on results gathered through our internal testing across different workloads.

You can download the tool and its documentation from here - http://www.microsoft.com/en-us/download/details.aspx?id=39057

Instructions:

1) Download the tool (exe) and documentation

2) Read the documentation first and then try out the tool. You should familiarize yourself with some nuances listed in the documentation before using the tool.

So go ahead, use the tool in your virtual infrastructure and share your feedback and questions through this blog post or in the community forum. We would love to hear your comments!

XenDesktop 7 Supports Windows Server 2012


We are excited to see the release of XenDesktop 7 and support our partner Citrix. XenDesktop 7 brings together both XenApp and XenDesktop functionality into a common release and now brings support for Windows Server 2012.  XenDesktop can easily be deployed on Hyper-V and take full advantage of Windows Server 2012 to increase agility, reduce cost, and provide a scalable and robust platform for desktop virtualization.

MMS 2013 Labs: Powered by Microsoft/HP Private Cloud...


MMS 2013 Hands On Labs

Virtualization Nation,

A few weeks ago we held the annual 2013 Microsoft Management Summit in Las Vegas. As in years past, the event sold out quickly and it was a very busy week. To everyone that attended, our sincere thanks.

As usual, the hands-on labs and instructor-led labs continue to be some of the most popular offerings at MMS. MMS Labs offer folks the opportunity to kick the tires on a wide array of Microsoft technologies and products. As usual the lines started early. For the fourth year in a row, all of the MMS Labs were 100% virtualized using Windows Server Hyper-V and managed via System Center by our partners at Xtreme Consulting Group and using HP servers and storage. Of course, this year we upgraded to the latest version so everything was running on a Microsoft Cloud powered by Windows Server 2012 Hyper-V and System Center 2012 SP1.

(BTW, I’ve blogged about this topic in the past years, if you’re interested the links are here and here.) Before I jump into the Microsoft Private Cloud, let me provide some context about the labs themselves.

What is an MMS Hands-On Lab?

One of the reasons the MMS Hands on Labs are so popular is because it’s a firsthand opportunity to evaluate and work with Windows Server and System Center in a variety of scenarios at your own pace. Here’s a picture of some of the lab stations…

 

With the hands on labs, we’ve done all the work to create these scenarios based on your areas of interest. So, what does one of these labs look like on the backend? Let’s be clear, none of these labs are a single VM. That’s easy. Been there, done that. When you sit down and request a specific lab, the cloud infrastructure provisions the lab on highly available infrastructure and deploys services that can be anywhere from 4 – 12 virtual machines in your lab in seconds. There are over 650 different lab stations and we have to account for all types of deployment scenarios. For example:

  1. In the first scenario, all users sit down at 8 am and provision exactly the same lab. Or,
  2. In the second scenario, all users sit down at 8 am and provision unique, different labs. Or,
  3. In the third scenario, all users sit down at 8 am and provision a mix of everything

The infrastructure then starts each lab in a few seconds. Let’s take a closer look at what some of the labs look like in terms of VM deployment.

MMS Lab Examples

Let’s start off with a relatively simple lab. This first lab is a Service Delivery and Automation lab. This lab uses:

  1. Four virtual machines
  2. 16 virtual processors
  3. 15 GB of memory total
  4. 280 GB of storage
  5. 2 virtual networks

…and here’s what each virtual machine is running…

 

Interested in virtualizing applications to deploy to your desktops, tablets, or Remote Desktop sessions? This next lab is a Microsoft Application Virtualization (App-V) 5.0 Overview lab. This lab uses:

  1. Seven virtual machines
  2. 14 virtual processors
  3. 16 GB of memory total
  4. 192 GB of storage
  5. 2 virtual networks 

 

How about configuring a web farm for multi-tenant applications? Here’s the lab which uses:

  1. Six virtual machines
  2. 24 virtual processors
  3. 16 GB of memory total
  4. 190 GB of storage
  5. 2 virtual networks

 

 

Ever wanted to enable secure remote access with RemoteApp, DirectAccess and Dynamic Access Control? Here’s the lab you’re looking for. This lab uses:

  1. Seven virtual machines
  2. 28 virtual processors
  3. 18 GB of memory total
  4. 190 GB of storage
  5. 2 virtual networks

 

Again, these are just a few of the dozens of labs ready for you at the hands on labs.

MMS 2013 Private Cloud: The Hardware

BTW, before I get to the specifics, let me point out that this Microsoft/HP Private Cloud Solution is an orderable solution available today...

Compute. Like last year, we used two HP BladeSystem c7000s for compute for the cloud infrastructure. Each c7000 had 16 nodes, and this year we upgraded to the latest BL460c Generation 8 blades. All 32 blades were then clustered to create a 32-node Hyper-V cluster. Each blade was configured with:

  1. Two sockets with 8 cores per socket and thus 16 cores. Symmetric Multi-Threading was enabled and thus we had a total of 32 logical processors per blade.
  2. 256 GB of memory per blade with Hyper-V Dynamic Memory enabled
  3. 2 local disks 300 GB SAS mirrored for OS Boot per blade
  4. HP I/O Accelerator cards (either 768 GB or 1.2 TB) per blade

Storage. This year we wanted to have a storage backend that could take advantage of the latest storage advancements in Windows Server 2012 (such as Offloaded Data Transfer and SMI-S) so we decided to go with a 3Par StoreServ P10800 storage solution. The storage was configured as a 4 node, scale-out solution using 8 Gb fibre channel and configured with Multi-Path IO and two 16 port FC switches for redundancy. There was a total of 153.6 TB of storage configured with:

  1. 64 x 200 GB SSD disks
  2. 128 x 600 GB 15k FC disks
  3. 32 x 2 TB 7,200 RPM SAS disks

As you can see, the 3Par includes SSD, 15k, and 7,200 RPM disks. This is so the 3Par can provide automated storage tiering with HP’s Adaptive Optimization. With storage tiering, this ensures the most frequently used storage (the hot blocks) resides in the fastest possible storage tier, whether that’s RAM, SSD, 15k, or 7,200 RPM disks respectively. With storage tiering you can mix and match storage types to find the right balance of capacity and IOPS for you. In short, storage tiering rocks with Hyper-V. From a storage provisioning perspective, SCVMM and the 3Par storage both support standards-based storage management through SMI-S, so the provisioning of the 3Par storage was done through System Center Virtual Machine Manager. Very cool.

Networking. From a networking perspective, the solution used VirtualConnect FlexFabric 10Gb/E and everything was teamed using Windows Server 2012 NIC Teaming. Once the network traffic was aggregated in software via teaming, that capacity was carved up in software.

Time for the Pictures…

Here’s a picture of the racks powering all of the MMS 2013 Labs. The two racks on the left with the yellow signs are the 3Par storage while the two racks on the right contain all of the compute nodes (32 blades) and management nodes (a two node System Center 2012 SP1 cluster). What you don’t see are the crowds gathered around pointing, snapping pictures, and gazing longingly…

 

MMS 2013: Management with System Center. Naturally, the MMS team used System Center to manage all the labs, specifically Operations Manager, Virtual Machine Manager, Orchestrator, Configuration Manager, and Service Manager. System Center 2012 SP1 was completely virtualized running on Hyper-V and was running on a small two node cluster using DL360 Generation 8 rackmount servers.

Operations Manager was used to monitor the health and performance of all the Hyper-V labs running Windows and Linux. Yes, I said Linux. Linux runs great on Hyper-V (it has for many years now) and System Center manages Linux very well. To monitor health proactively, we used the ProLiant and BladeSystem Management Packs for System Center Operations Manager. The HP Management Packs expose the native management capabilities through Operations Manager such as:

  • Monitor, view, and get alerts for HP servers and blade enclosures
  • Directly launch iLO Advanced or SMH for remote management
  • Graphical View of all of the nodes via Operations Manager

In addition, 3Par has management packs that plug right into System Center, so Operations Manager was used to manage the 3Par storage as well…

 

 

…having System Center integration with the 3Par storage came in handy when one of the drives died and Operations Manager was able to pinpoint exactly what disk failed and in what chassis…

 

Of course, everything in this Private Cloud solution is fully redundant so we didn’t even notice the disk failure for some time…

In terms of managing the overall solution, here’s a view of some of the real time monitoring we were displaying and where many folks just sat and watched.

 

Virtual Machine Manager was used to provision and manage the entire virtualized lab delivery infrastructure and monitor and report on all the virtual machines in the system. In addition, HP has written a Virtual Machine Manager plug-in so you can view the HP Fabric from within System Center Virtual Machine Manager. Check this out:

 

 

It should go without saying that to support a lab of this scale and with only a few minutes between the end of one lab and the beginning of the next, automation is a key precept. The Hands on Lab team was positively gushing about PowerShell. “In the past, when we needed to provide additional integration it was a challenge. WMI was there, but the learning curve for WMI is steep and we’re system administrators. With PowerShell built-into WS2012, we EASILY created solutions and plugged into Orchestrator. It was a huge time saver.”

MMS 2013: Pushing the limit…

As you may know, Windows Server 2012 Hyper-V supports up to 64 nodes and 8,000 virtual machines in a cluster. Well, we have a history of pushing the envelope with this gear and this year was no different. At the very end of the show, the team fired up as many virtual machines as possible to see how high we could go. (These were all lightly loaded as we didn’t have the time to do much more…) On Friday, the team fired up 8,312 virtual machines (~260 VMs per blade) running on the 32 node cluster. Each blade has 256 GB of memory and we kept turning on VMs until all the memory was consumed.

MMS 2013: More data…

  • Over the course of the week, over 48,000 virtual machines were provisioned. This is ~8,000 more than last year. Here’s a quick chart. Please note that Friday is just a half day…

 

  • Average CPU Utilization across the entire pool of servers during labs hovered around 15%. Peaks were recorded a few times at ~20%. In short, even with thousands of Hyper-V VMs running on a 32 node cluster, we were barely taxing this well architected and balanced system.
  • While each blade was populated with 256 GB, they weren’t maxed. Each blade can take up to 384 GB.
  • Storage Admins: Disk queues for each of the hosts largely remained at 1.0 (1.0 is nirvana). When 3200 VMs were deployed simultaneously, the disk queue peaked at 1.3. Read that again. Show your storage admins. (No, those aren’t typos.)
  • The HP I/O Accelerators used were the 768 GB and 1.2 TB versions. The only reason we used a mix of different sizes is because that’s what we had available.
  • All I/O was configured for HA and redundancy.
    • Network adapters were teamed with Windows Server 2012 NIC Teaming
    • Storage was fibre channel and was configured with Active-Active Windows Server Multi-Path I/O (MPIO). None of it was needed, but it was all configured, tested and working perfectly.
  • During one of the busiest days at MMS 2013 with over 3500 VMs running simultaneously, this configuration wasn’t even breathing hard. It’s truly a sight to behold and a testament to how well this Microsoft/HP Private Cloud Solution delivers.

From a management perspective, System Center was the heart of the system providing health monitoring, ensuring consistent hardware configuration and providing the automation that makes a lab this complex successful. At its peak, with over 3500 virtual machines running, you simply can’t work at this scale without pervasive automation.

From a hardware standpoint, the HP BladeSystem and 3Par storage are simply exceptional. Even at peak load running 3500+ virtual machines, we weren’t taxing the system. Not even close. Furthermore, the fact that the HP BladeSystem and 3Par storage integrate with Operations Manager, Configuration Manager and Virtual Machine Manager provides incredible cohesion between systems management and hardware. When a disk unexpectedly died, we were notified and knew exactly where to look. From a performance perspective, the solution provides a comprehensive way to view the entire stack. From System Center we can monitor compute, storage, virtualization and most importantly the workloads running within the VMs. This is probably a good time for a reminder…

If you’re creating a virtualization or cloud infrastructure, the best platform for Microsoft Dynamics, Microsoft Exchange, Microsoft Lync, Microsoft SharePoint and Microsoft SQL Server is Microsoft Windows Server with Microsoft Hyper-V managed by Microsoft System Center. This is the best tested, best performing, most scalable solution and is supported end to end by Microsoft.

One More Thing...

Finally, I’ve been talking about Windows Server and System Center as part of our Microsoft Private Cloud Solution. I’d also like to point out that Windows Server 2012 Hyper-V is the same rock-solid, high performing and scalable hypervisor we use to power Windows Azure too.

Read that again.

That’s right. Windows Azure is powered by Windows Server 2012 Hyper-V. See you at TechEd.

Jeff Woolsey
Windows Server & Cloud

P.S. Hope to see you at the Hands on Lab at TechEd!

 

More pictures below…

Here’s a close up of one of the racks. This rack has one of the c7000 chassis with 16 nodes for Hyper-V. It also includes the two clustered management heads used for System Center. At the bottom of the rack are the Uninterruptible Power Supplies.

 

 

 …and here’s the back of one of the racks that held a c7000…

 

 

HP knew there was going to be a lot of interest, so they created full size cardboard replicas diagramming the hardware in use.

…and here’s one more…

  

 

 

 

 


Using SMB shares with Hyper-V Replica


SMB is getting a lot of attention with Windows Server 2012, and we’ve had questions from a few customers regarding the interplay between SMB shares and Hyper-V Replica. In this post we’ll share our experience setting up and using various configurations involving SMB shares and Hyper-V Replica. The issue we were expecting to run into is the apparent lack of authorization to use the SMB share when using remote management.

 

In all the scenarios that are investigated, we will start from a remote management node (mgmtnode.contoso.com). We will try to set up the scenario from this management node, and work through the errors encountered. In order to visualize what this means, all the scenarios will look roughly like this:

001 base architecture

Scenario #1: Single Replica server with SMB share

The building blocks

  • A single Hyper-V server (aashish-server.contoso.com) on the Replica site
  • A single server (aashish-server3.contoso.com) hosting an SMB share \\aashish-server3\Replica-Site that will be used to store the Replica VMs.
  • A single remote management server (mgmtnode.contoso.com)

Setting up the infrastructure

To start with, we will try using the Hyper-V Manager UI. On the management node (mgmtnode.contoso.com), open the Hyper-V Manager UI and add the server aashish-server on the left-side pane using “Connect to Server…”. Now enable aashish-server as a Replica server using the Hyper-V Settings on the right-side pane. As expected, we run into an error:

002 single server UI failure

 

The error encountered is “Failed to add authorization entry. Unable to open specified location to store Replica files ‘\\aashish-server3\Replica-Site\’. Error: 0x80070005 (General access denied error).”, and it is not a very helpful error message. Hopefully this blog can help alleviate that situation.

Fixing the error

While the standard answer to fixing this error is to set up constrained delegation, that is not always optimal. Yes, the core issue is the delegation of credentials when there is an additional hop (mgmtnode –> aashish-server –> aashish-server3). However, depending on your setup and how often you plan to change the Replica server settings, there are simpler solutions.

  1. Remote into aashish-server directly and set up the Replica server – this eliminates the hop that causes problems. With just one server to configure, this could be the simplest solution for your needs.
  2. Use CredSSP and PowerShell to delegate credentials and set up the Replica server – we will explore this later in the blog post. This is an excellent option for users where:
    1. No domain controller access is possible.
    2. No Remote Desktop access is possible.
    3. The Windows Server UI is not present on any node other than the management node.
  3. Set up constrained delegation in your domain controller. This option has been explored extensively by others and there is ample material on this available online.

Scenario #2: Multiple Replica servers (unclustered) with SMB share

For all practical purposes this is like the single Replica server scenario discussed above, except that you will have to remote into each server and set up replication. Even at 5 servers, this is a painful exercise. Constrained delegation starts to look like an increasingly attractive option. Yet, without access to the domain controller, perhaps the realistic route is that of CredSSP and PowerShell – and that is something we will cover in detail in this post.

Scenario #3: Replica cluster with SMB share

The building blocks

  • A failover cluster (AAR-130612) on the Replica site having the .contoso.com domain. This consists of two servers (aashish-s1, aashish-s2), and a Replica Broker (AARBrk-130612). The broker can be present on either node, but in this example we will assume that it resides on aashish-s2.
  • A single server (aashish-server3.contoso.com) hosting an SMB share \\aashish-server3\Replica-Site that will be used to store the Replica VMs.
  • A single remote management server (mgmtnode.contoso.com)

003 cluster

Setting up the infrastructure

As with the non-clustered scenarios, you will run into the General access denied error when you use the Failover Cluster UI to change the replication settings.

004 replicabroker failure

Trying this through PowerShell will give you a similar error:

Set-VMReplicationServer : Failed to add authorization entry. Unable to open specified location to store Replica files '\\aashish-server3\Replica-Site'. Error: 0x80070005 (General access denied error). You do not have permission to perform the operation. Contact your administrator if you believe you should have permission to perform this operation.
  At line:1 char:1
  + Set-VMReplicationServer -ComputerName AARBrk-130612 -AllowedAuthenticationType ...
  + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     + CategoryInfo          : PermissionDenied: (Microsoft.HyperV.PowerShell.VMTask:VMTask) [Set-VMReplicationServer], VirtualizationOperationFailedException
     + FullyQualifiedErrorId : AccessDenied,Microsoft.HyperV.PowerShell.Commands.SetReplicationServerCommand

Fixing the error

Although the cluster has multiple nodes, the scenario is similar to that of a single Replica server. When using clusters, it is sufficient to make the changes to the Hyper-V Replica Broker (AARBrk-130612 in this case), and the replication settings will be propagated to the rest of the cluster nodes. So depending on your setup and how often you plan to change the Replica Broker settings, there are a few options to consider:

  1. Remote into the server on which the Replica Broker is running (aashish-s2 in this case) and use the Failover Cluster UI to set up the Replica directly – this eliminates the hop that causes problems. In most cases, the replication configuration is a one-time operation, so this could be the simplest solution for your needs.
  2. Remote into any server in the cluster and use the PowerShell cmdlet Set-VMReplicationServer. There is no need to use –ComputerName in the parameters as the cmdlet itself is cluster-aware.
  3. Use CredSSP and PowerShell to delegate credentials and set this up if options 1 and 2 are not for you.
  4. Set up constrained delegation in your domain controller.

 

Using CredSSP and PowerShell

This option is interesting from many angles. The big attraction is PowerShell; with administrators increasingly moving to PowerShell to automate and manage their infrastructure, getting things done through PowerShell is sometimes more important than through a UI. Just as important is the fact that remoting into a Replica server is not always feasible or advisable – and hence the management node from where all actions are performed. The CredSSP-based solution fits neatly into such a scenario.

For enabling the delegation of credentials, run the following commands on the management node:

  • Enable-WSManCredSSP –Role Client –DelegateComputer aashish-s1.contoso.com
  • Invoke-Command –ComputerName aashish-s1.contoso.com –ScriptBlock { Enable-WSManCredSSP –Role Server }

Once this is done, follow up with Set-VMReplicationServer, run on the cluster host that you have just delegated to:

  • Invoke-Command –ComputerName aashish-s1.contoso.com –Authentication Credssp –Credential DOMAIN1\user1 –ScriptBlock { Set-VMReplicationServer …}

where DOMAIN1 and user1 are authenticated on aashish-s1.contoso.com. Note that you do not need to use –ComputerName with the Set-VMReplicationServer command because it is cluster aware!
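Putting the pieces together, a one-time configuration run from the management node might look like the following sketch. The storage path, authentication settings and credentials are illustrative; adjust them to your deployment.

```powershell
# One-time: allow the management node to delegate fresh credentials
# to the cluster node that currently hosts the Replica Broker.
Enable-WSManCredSSP -Role Client -DelegateComputer aashish-s1.contoso.com -Force
Invoke-Command -ComputerName aashish-s1.contoso.com -ScriptBlock {
    Enable-WSManCredSSP -Role Server -Force
}

# Configure the Replica server settings on the cluster node; the cmdlet is
# cluster-aware, so the settings propagate through the Replica Broker.
Invoke-Command -ComputerName aashish-s1.contoso.com `
    -Authentication Credssp -Credential DOMAIN1\user1 `
    -ScriptBlock {
        Set-VMReplicationServer -ReplicationEnabled $true `
            -AllowedAuthenticationType Kerberos `
            -ReplicationAllowedFromAnyServer $true `
            -DefaultStorageLocation '\\aashish-server3\Replica-Site'
    }
```

Note that the -Force switch simply suppresses the confirmation prompts, which is useful when scripting this from the management node.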

 

Adding a node to this cluster

So what happens if you add a new node to this Replica cluster? No worries! The replication settings are propagated to this node also with no additional steps on your part. If you need to change the replication settings, you can use the same steps outlined without worrying about the new server.

 

Concluding note

Hopefully this will set you back on track with Hyper-V Replica and SMB shares. Give this a go and share your experience with us – we would love to hear your feedback!

Save network bandwidth by using Out-of-Band Initial Replication method in Hyper-V Replica


In our recent conversations with customers about Hyper-V Replica, the questions that came up from a few customers were:

  • Is there a way to perform the Initial Replication (IR) for VMs without stressing our organization’s Internet bandwidth?
  • Initial replication of our VMs takes weeks to complete; is there a faster way to get the data across to our secondary datacenter?

The answer is “Yes”. Hyper-V Replica supports an option where you can transport the initial copy of your VM to the Replica site using an external storage medium – like a USB drive. This method of seeding the Replica site is known as Out-of-Band Initial Replication (OOB IR) and is the focus of this blog post.

OOB IR is especially helpful if you have large amounts of data to be replicated and the datacenters are not connected by a very high speed network. As an example, it will take around 20 days to complete initial replication of 2 TB of data if the network link between the Primary site and Replica site is 10 Mbps.
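That estimate is simple arithmetic; here is a quick sanity check, assuming the link sustains its full rated throughput (real WAN links rarely do, which is why "around 20 days" is a fair real-world figure):

```python
# Rough transfer-time estimate for initial replication over the WAN.
data_bytes = 2 * 10**12      # 2 TB of VM data
link_bps = 10 * 10**6        # 10 Mbps link between the two sites

seconds = data_bytes * 8 / link_bps   # bytes -> bits, divided by link speed
days = seconds / 86400                # seconds per day

print(f"{days:.1f} days")    # ~18.5 days of pure transfer time
```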

The following steps walk you through the process of using OOB IR.  

Steps to take on the Primary Site

  1. Connect your external storage medium (e.g. a USB drive) to the Hyper-V host where the VM is running. In the example below the USB drive on the Hyper-V host is mounted under the drive letter F:\. If your Primary Server is a:
    • Standalone Hyper-V host – ensure that you connect the external media directly to the Hyper-V host on which the virtual machine is hosted.
    • Failover Hyper-V cluster – ensure that you connect the external media directly to the owner node for the VM. The owner node for a VM can be identified through the Failover Cluster Manager MMC. For example, in the screen shot below the owner node for the SQLDB_MyApplication VM is HV-CLUS-01.
  2. Initiate the replication wizard by right-clicking on the VM and selecting ‘Enable Replication’.
  3. Go through the wizard till you reach the ‘Choose Initial Replication Method’ screen.
  4. This page allows you to choose how you want to transfer the initial copy of the virtual machine to the Replica site. The 3 options here are:
    • Send initial copy over the network
    • Send initial copy using external media
    • Use an existing virtual machine on the Replica server as the initial copy
  5. Choose the second option – ‘Send initial copy using external media’ – and specify a location where the initial copy should be stored. In our example, we have chosen a location on the USB drive.
  6. On the summary screen click Finish.
  7. The virtual machine will be enabled for replication and the initial replica will be created in the folder mentioned in step 5. A placeholder VM is created on the Replica site as part of the enable replication process.
  8. Once ‘Sending Initial Replica’ finishes for all the replication-enabled VMs, the external storage medium can be shipped to the Replica site.
  9. From this point onwards the changes that happen to the VM will be replicated over and applied to the placeholder VM on the Replica site. These changes will be merged with the OOB IR data once it is imported at the Replica site.

    Note: For security of the data, it is recommended that the external storage media be encrypted using encryption technologies like BitLocker.

The same steps can be achieved using PowerShell.

First enable replication for the VM using the following cmdlet:

Enable-VMReplication –VMName SQLDB_MyApplication –ReplicaServerName ReplicaServer.Contoso.com –ReplicaServerPort 80 –AuthenticationType Kerberos

Then export the initial replica using the following cmdlet:

Start-VMInitialReplication –VMName SQLDB_MyApplication –DestinationPath F:\VirtualMachineData\

Steps to take on the Replica Site

  1. Once the external storage medium is received at the Replica site, request the site administrator to do one of the following. If your Replica server is a:
    • Standalone Hyper-V host – ensure that the external media is connected directly to the Hyper-V host, or copy the data from the external media into a local folder on the Hyper-V host.
    • Failover Hyper-V cluster – ensure that the external media is connected directly to the owner node of the Replica VM, or copy the data from the external media to a cluster shared volume.
  2. On the Replica VM, complete the OOB IR process by choosing Replication -> Import Initial Replica… from the context menu as shown below.
  3. Provide the location of the VM’s initial copy. You can recognize the folder in which the replica is stored by checking for the folder name which starts with the name of your VM. In my case the VM was called SQLDB_MyApplication and the folder name is D:\VMInitialReplica\SQLDB_MyApplication_A60B7520-724D-4708-8C09-56F6438930D9.
  4. Click on ‘Complete Initial Replication’ to import the initial copy and merge it with the placeholder VM. Once the import is completed the Replica VM has been created.
  5. From this point onwards your VM is protected and will allow you to perform operations like Failover and Test Failover.

The same steps can be achieved using PowerShell. Copy the initial replica onto a local drive on the Replica server (say D:\VirtualMachineData\) and then run the cmdlet below to import it:

Import-VMInitialReplication –VMName SQLDB_MyApplication –Path D:\VirtualMachineData\SQLDB_MyApplication_A60B7520-724D-4708-8C09-56F6438930D9

Hyper-V Replica offers one more method for initial replication that utilizes a backup copy of the VM to seed the replication; we will cover that in our next blog post.

    Enabling Linux Support on Windows Server 2012 R2 Hyper-V


    This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Linux Support on Windows Server 2012 R2 and how it applies to Brad’s larger topic of “Transform the Datacenter”.  To read that post and see the other technologies discussed, read today’s post:  “What’s New in 2012 R2:  Enabling Open Source Software.” 

    The ability to provision Linux on Hyper-V and Windows Azure is one of Microsoft’s core efforts towards enabling great Open Source Software support. As part of this initiative, the Microsoft Linux Integration Services (LIS) team pursues ongoing development of enlightened Linux drivers that are directly checked in to the Linux upstream kernel thereby allowing direct integration into upcoming releases of major distributions such as CentOS, Debian, Red Hat, SUSE and Ubuntu.

    The Integration Services were originally shipped as a download from Microsoft’s sites. Linux users could download and install these drivers and contact Microsoft for any requisite support. As the drivers have matured, they are now delivered directly through the Linux distributions. Not only does this approach avoid the extra step of downloading drivers from Microsoft’s site but it also allows users to leverage their existing support contracts with Linux vendors.

    For example Red Hat has certified enlightened drivers for Hyper-V on Red Hat Enterprise Linux (RHEL) 5.9 and certification of RHEL 6.4 should be complete by summer 2013. This will allow customers to directly obtain Red Hat support for any issues encountered while running RHEL 5.9/6.4 on Hyper-V.

    To further the goal of providing great functionality and performance for Linux running on Microsoft infrastructure, the following new features are now available on Windows Server 2012 R2 based virtualization platforms:

     

    1. Linux Synthetic Frame Buffer driver – Provides enhanced graphics performance and superior resolution for Linux desktop users.
    2. Linux Dynamic memory support – Provides higher virtual machine density/host for Linux hosters.
    3. Live Virtual Machine Backup support – Provisions uninterrupted backup support for live Linux virtual machines.
    4. Dynamic expansion of fixed size Linux VHDs – Allows expansion of live mounted fixed sized Linux VHDs.
    5. Kdump/kexec support for Linux virtual machines – Allows creation of kernel dumps of Linux virtual machines.
    6. NMI (Non-Maskable Interrupt) support for Linux virtual machines – Allows delivery of manually triggered interrupts to Linux virtual machines running on Hyper-V.
    7. Specification of Memory Mapped I/O (MMIO) gap – Provides fine grained control over available RAM for virtual appliance manufacturers.

All of these features have been integrated into SUSE Linux Enterprise Server 11 SP3, which can be downloaded from the SUSE website (https://www.suse.com/products/server/). In addition, integration work is in progress for the upcoming Ubuntu 13.10 and RHEL 6.5 releases.

     Further details on these new features and their benefits are provided in the following sections:

     1.       Synthetic Frame Buffer Driver

     The new synthetic 2D frame buffer driver provides solid improvements in graphics performance for Linux virtual machines running on Hyper-V. Furthermore, the driver provides full HD mode resolution (1920x1080) capabilities for Linux guests hosted in desktop mode on Hyper-V.

One other noticeable impact of the synthetic frame buffer driver is the elimination of the double-cursor problem. While using desktop mode on older Linux distributions, several customers reported two visible mouse pointers that appeared to chase each other on screen. This distracting issue is now resolved by the synthetic 2D frame buffer driver, improving the visual experience for Linux desktop users.

     2.       Dynamic Memory Support

The availability of dynamic memory for Linux guests provides higher virtual machine density per host. This will bring huge value to Linux administrators looking to consolidate their server workloads using Hyper-V. In-house test results indicate a 30-40% increase in server capacity when running Linux machines configured with dynamic memory.

     The Linux dynamic memory driver monitors the memory usage within a Linux virtual machine and reports it back to Hyper-V on a periodic basis. Based on the usage reports Hyper-V dynamically orchestrates memory allocation and deallocation across various virtual machines being hosted. Note that the user interface for configuring dynamic memory is the same for both Linux and Windows virtual machines.

     

The dynamic memory driver for Linux virtual machines provides both Hot-Add and Ballooning support and can be configured using the Start, Minimum RAM and Maximum RAM parameters as shown in Figure 1.

    Upon system start the Linux virtual machine is booted up with the amount of memory specified in the Start parameter.

    If the virtual machine requires more memory then Hyper-V uses the Hot-Add mechanism to dynamically increase the amount of memory available to the virtual machine.

    On the other hand, if the virtual machine requires less memory than allocated then Hyper-V uses the ballooning mechanism to reduce the memory available to the virtual machine to a more appropriate amount.

     

    Figure 1 Configuring a Linux virtual machine with Dynamic Memory
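The same configuration shown in Figure 1 can also be applied from PowerShell with the built-in Set-VMMemory cmdlet; the VM name below is illustrative:

```powershell
# Enable dynamic memory on a Linux VM: boot with 786 MB, allow Hyper-V to
# balloon down to 500 MB or hot-add up to 8 GB.
# Note: the VM must be powered off when dynamic memory is first enabled.
Set-VMMemory -VMName "OSTC-Workshop-WWW2" `
    -DynamicMemoryEnabled $true `
    -StartupBytes 786MB `
    -MinimumBytes 500MB `
    -MaximumBytes 8GB
```

The configuration is identical for Windows and Linux guests; only the in-guest driver differs.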

     Increase in virtual machine density is an obvious advantage of use of dynamic memory. Another great application is the use of dynamic memory in scaling application workloads. The following paragraphs illustrate an example of a web server that was able to leverage dynamic memory to scale operations in the event of increasing client workload.

 For illustrative purposes, two apache servers hosted inside separate Linux virtual machines were set up on a Hyper-V server. One of the Linux virtual machines was configured with a static RAM of 786 MB whereas the other Linux virtual machine was configured with dynamic memory. The dynamic memory parameters were set up as follows: Startup RAM was set to 786MB, Maximum RAM was set to 8GB and the Minimum RAM was set to 500MB. Next, both apache servers were subjected to a monotonically increasing web server workload through a client application hosted in a Windows virtual machine.

 Under the static memory configuration, as the apache server becomes overloaded, the number of transactions/second that can be performed by the server continues to fall due to high memory demand. This can be observed in Figure 2 and Figure 3. Figure 2 represents the initial warm up period when there is ample free memory available to the Linux virtual machine hosting apache. During this period the number of transactions/second is as high as 103 with an average latency/transaction of 58ms.

     

     Figure 2 Server and Client statistics during initial warm up period for the Linux apache server configured with static RAM

     As the workload increases and the amount of free memory becomes scarce, the number of transactions/second drops to 32 and the average latency/transaction increases to 485ms. This situation can be observed in Figure 3.

     

     Figure 3 Server and client statistics for an overloaded Linux apache server configured with static RAM

     Next consider the case of the apache server hosted in a Linux virtual machine configured with dynamic memory. Figure 4 shows that for this server the amount of available memory quickly ramps up through Hyper-V’s hot-add mechanism to over 2GB and the number of transactions/second is 120 with an average latency/transaction of 182 ms during the warm up phase itself.

      

    Figure 4 Server and client statistics during startup phase of Linux apache server configured with Dynamic RAM

 As the workload continues to increase, over 3GB of free memory becomes available and therefore the server is able to sustain the number of transactions/second at 130 even though average latency/transaction increases to 370ms. Notice that this memory gain can only be achieved if there is enough memory available on the Hyper-V host. If the Hyper-V host memory is low then any demand for more memory by a guest virtual machine may not be satisfied, and applications may receive “no free memory” errors.

     

    Figure 5 Overloaded Linux apache server configured with Dynamic RAM

    3.       Live Virtual Machine Backup Support

A much requested feature from customers running Linux on Hyper-V is the ability to create seamless backups of live Linux virtual machines. In the past customers had to either suspend or shut down the Linux virtual machine to create backups. Not only is this process hard to automate but it also leads to increased downtime for critical workloads.

    To address this shortcoming, a file-system snapshot driver is now available for Linux guests running on Hyper-V. Standard backup APIs available on Hyper-V can be used to trigger the driver to create file-system consistent snapshots of VHDs attached to a Linux virtual machine without disrupting any operations in execution within the virtual machine.

    The best way to try out this feature is to take a backup of a running Linux virtual machine through Windows Backup. The backup can be triggered from the Windows Server Backup UI as shown in Figure 6. As can be observed the live virtual machine labeled OSTC-Workshop-WWW2 is going to be backed up. Once the backup operation completes a message screen similar to Figure 7 should be visible.

     

     Figure 6 Using Windows Server Backup to backup a live Linux virtual machine

     Figure 7 Completion of backup operation for a live Linux virtual machine
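For scripted backups, the same operation can be triggered from an elevated command prompt with the wbadmin tool; the backup target volume and VM name below are illustrative, and the Windows Server Backup feature must be installed on the Hyper-V host:

```
wbadmin start backup -backupTarget:D: -hyperv:"OSTC-Workshop-WWW2" -quiet
```

Because the backup goes through the standard Hyper-V backup APIs, the Linux file-system snapshot driver is invoked automatically and the VM keeps running throughout.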

    One important difference between the backups of Linux virtual machines and Windows virtual machines is that Linux backups are file-system consistent only whereas Windows backups are file-system and application consistent. This difference is due to lack of standardized Volume Shadow Copy Service (VSS) infrastructure in Linux.

    4.       Dynamic Expansion of Live Fixed Sized VHDs

    The ability to dynamically resize a fixed sized VHD allows administrators to allocate more storage to the VHD while keeping the performance benefits of the fixed size format. The feature is now available for Linux virtual machines running on Hyper-V. It is worth noting that Linux file-systems are quite adaptable to dynamic changes in size of the underlying disk drive. To illustrate this functionality let us look at how a fixed sized VHD attached to a Linux virtual machine can be resized while it is mounted.

    First, as shown in Figure 8, a 1GB fixed sized VHD is attached to a Linux virtual machine through the SCSI controller. The amount of space available on the VHD can be observed through the df command as shown in Figure 9.

    Figure 8 Fixed Sized VHD attached to a Linux virtual machine through the SCSI Controller

     

    Figure 9 Space usage in the Fixed Sized VHD

    Next, a workload is started to consume more space on the fixed sized VHD. While the workload is running, when the amount of used space goes beyond the 50% mark (Figure 10), the administrator may increase the size of the VHD to 2GB using the Hyper-V manager UI as shown in Figure 11.

     

    Figure 10 Amount of used space goes beyond 50% of the current size of the Fixed Sized VHD

      

    Figure 11 Expanding a Fixed Size VHD from 1GB to 2GB

    Once the VHD is expanded, the df command will automatically update the amount of disk space to 2GB as shown in Figure 12. It is important to note that both the disk and the file-system adapted to the increase in size of the VHD while it was mounted and serving a running workload.

     

    Figure 12 Dynamically adjusted df statistics upon increase in size of Fixed Sized VHD
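For scripted deployments, the expansion performed in Figure 11 through the Hyper-V Manager UI can also be done with the Resize-VHD cmdlet. The path below is illustrative; note that live resizing of an attached disk is supported for VHDX-format disks on a SCSI controller, while a VHD that is not attached can be resized offline:

```powershell
# Expand the virtual disk from its current size to 2 GB. The guest
# file-system then adapts to the larger disk as described above.
Resize-VHD -Path "D:\VHDs\datadisk.vhdx" -SizeBytes 2GB
```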

    5.       Linux kdump/kexec support

One particular pain point for hosters running Linux on Windows Server 2012 and Windows Server 2008 R2 environments is that legacy drivers (as mentioned in KB 2858695) must be used to create kernel dumps for Linux virtual machines.

    In Windows Server 2012 R2, the Hyper-V infrastructure has been changed to allow seamless creation of crash dumps using enlightened storage and network drivers and therefore no special configurations are required anymore. Linux users are free to dump core over the network or the attached storage devices.

    6.       NMI Support

    If a Linux system becomes completely unresponsive while running on Hyper-V, users now have the option to panic the system by using Non-Maskable Interrupts (NMI). This is particularly useful for diagnosing systems that have deadlocked due to kernel or user mode components.

The following paragraphs illustrate how to test this functionality. As a first step, observe whether any NMIs are pending in your Linux virtual machine by executing, in a Linux terminal session, the command shown in Figure 13:

     

    Figure 13 Existing NMIs issued to the Linux virtual machine
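The command in Figure 13 is not reproduced here; on x86 Linux guests, per-CPU NMI counts can typically be inspected from procfs with a command along these lines:

```
# Show per-CPU NMI counters; the count increases when an NMI is delivered.
grep NMI /proc/interrupts
```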

Next, issue an NMI from a PowerShell window using the command shown below:

Debug-VM -Name <Virtual Machine Name> -InjectNonMaskableInterrupt -ComputerName <Hyper-V host name> -Confirm:$False -Force

    Next check if the NMI has been delivered to the Linux VM by repeating the command shown in Figure 13. The output should be similar to what is shown in Figure 14 below:

     

    Figure 14 New NMIs issued to the Linux virtual machine

    7.       Specification of Memory Mapped I/O (MMIO) gap

    Linux based appliance manufacturers use the MMIO gap (also known as PCI hole) to divide the available physical memory between the Just Enough Operating System (JeOS) that boots up the appliance and the actual software infrastructure that powers the appliance. Inability to configure the MMIO gap causes the JeOS to consume all of the available memory leaving nothing for the appliance’s custom software infrastructure. This shortcoming inhibits the development of Hyper-V based virtual appliances.

    The Windows Server 2012 R2 Hyper-V infrastructure allows appliance manufacturers to configure the location of the MMIO gap. Availability of this feature facilitates the provisioning of Hyper-V powered virtual appliances in hosted environments. The following paragraphs provide technical details on this feature.

    The memory of a virtual machine running on Hyper-V is fragmented to accommodate two MMIO gaps.  The lower gap is located directly below the 4GB address.  The upper gap is located directly below the 128GB address.  Appliance manufacturers can now set the lower gap size to a value between 128MB and 3.5GB. This indirectly allows specification of the start address of the MMIO gap. 

    The location of the MMIO gap can be set using the following sample PowerShell script functions:

    ############################################################################
    #
    # GetVmSettingData()
    #
    # Gets the virtual system settings data for all VMs on the Hyper-V host
    #
    ############################################################################

    function GetVmSettingData([String] $name, [String] $server)
    {
        if (-not $name)
        {
            return $null
        }

        $vssd = gwmi -n root\virtualization\v2 -class Msvm_VirtualSystemSettingData -ComputerName $server

        if (-not $vssd)
        {
            return $null
        }

        foreach ($vm in $vssd)
        {
            if ($vm.ElementName -ne $name)
            {
                continue
            }

            return $vm
        }

        return $null
    }

    ###########################################################################
    #
    # SetMMIOGap()
    #
    # Description: Validates and sets the low MMIO gap for the Linux VM.
    # Expects $vmName and $hvServer to be defined at script scope.
    #
    ###########################################################################

    function SetMMIOGap([INT] $newGapSize)
    {
        #
        # Get the VM settings
        #
        $vssd = GetVmSettingData $vmName $hvServer
        if (-not $vssd)
        {
            return $false
        }

        #
        # Create a management object
        #
        $mgmt = gwmi -n root\virtualization\v2 -class Msvm_VirtualSystemManagementService -ComputerName $hvServer
        if (-not $mgmt)
        {
            return $false
        }

        #
        # Set the new MMIO gap size
        #
        $vssd.LowMmioGapSize = $newGapSize

        $sts = $mgmt.ModifySystemSettings($vssd.GetText(1))

        if ($sts.ReturnValue -eq 0)
        {
            return $true
        }

        return $false
    }
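    A hypothetical usage sketch (the VM and host names are placeholders; the functions above read $vmName and $hvServer from script scope, and LowMmioGapSize is specified in MB):

    ```powershell
    $vmName   = "LinuxAppliance01"
    $hvServer = "HyperVHost01"

    # Request a 2GB lower MMIO gap (value in MB)
    if (SetMMIOGap 2048)
    {
        "MMIO gap updated for $vmName"
    }
    ```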

    The location of the MMIO gap can be verified by searching for the keyword “pci_bus” in the post-boot dmesg log of the Linux virtual machine. The output lines containing the keyword provide the start memory address of the MMIO gap. The size of the MMIO gap can then be verified by subtracting the start address from 4GB represented in hexadecimal.
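    As a worked example of that arithmetic (the start address below is hypothetical, taken from a typical dmesg line), the gap size can be computed as follows:

    ```shell
    # Hypothetical start address taken from a dmesg line such as:
    #   pci_bus 0000:00: root bus resource [mem 0xf8000000-0xfffbffff]
    GAP_START=$(( 0xf8000000 ))
    # Gap size = 4GB (0x100000000) minus the gap start address
    GAP_BYTES=$(( 0x100000000 - GAP_START ))
    echo "MMIO gap size: $(( GAP_BYTES / 1024 / 1024 )) MB"
    ```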

    Summary

    Over the past year, the LIS team added a slew of features to enable great support for Linux virtual machines running on Hyper-V. These features will not only simplify the process of hosting Linux on Hyper-V but will also provide superior consolidation and improved performance for Linux workloads. The team is actively working with various Linux vendors to bring these features into newer distribution releases. The team is eager to hear customer feedback and welcomes any feature proposals that will help improve the experience of hosting Linux on Hyper-V. Customers may get in touch with the team through linuxic@microsoft.com or through the Linux Kernel Mailing List (https://lkml.org/).

    To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive

    Advanced Tools and Scripting with PowerShell 3.0


    Join our next Microsoft Virtual Academy (MVA) Jump Start free online training event on August 1, featuring Microsoft’s Jeffrey Snover, Distinguished Engineer and Lead Architect for the Windows Server division and inventor of PowerShell, together with Jason Helmick, Senior Technologist at Concentrated Technology. Find out how to turn your real time management and automation scripts into useful reusable tools and cmdlets. You’ll learn the best patterns and practices for building and maintaining tools and you’ll pick up some special tips and tricks along the way. Register online today! http://aka.ms/AdvPwShl

    Get the Scoop - FREE Frozen Custard and Hyper-V


    VMworld is in full swing and Microsoft is there to actively participate. One obvious question that VMworld attendees (and IT professionals who are familiar with VMware) are probably asking right now: “why would I want to learn about Hyper-V while attending VMworld?” The answer is simple: to help their careers as technology professionals. Research shows that over 70% of businesses now have more than one virtualization platform in their IT environment. As you can imagine, this trend is opening up opportunities for IT professionals who are familiar with more than one virtualization platform. And if you look at the market data, it is clear that Hyper-V is the one to watch (and try!).

    Sound interesting? Get more information on our activities at VMworld 2013 from Microsoft Senior PMM Varun Chhabra in his post, “Get the “Scoop” on Hyper-V During VMworld”. Well worth the read on how we compare.

    For those of you interested in downloading some of the products and trying them, here are some resources to help you:

    • Windows Server 2012 R2 Preview download
    • System Center 2012 R2 Preview download
    • SQL Server 2014 Community Technology Preview 1 (CTP1) download
    • Windows 8.1 Enterprise Preview download

    As always, follow us on Twitter via @MSCloud!  And if you would like to follow Microsoft VP Brad Anderson, do that via @InTheCloudMSFT !

    Using an existing VM for initial replication in Hyper-V Replica


    Hyper-V Replica provides three methods to do initial replication:

    1. Send data over the network (Online IR)
    2. Send data using external media (OOB IR)
    3. Use an existing virtual machine as the initial copy

    Each option for initial replication has a specific scenario for which it excels. In this post we will dive into the underlying reasons for including option 3 in Hyper-V Replica, the scenarios where it is advantageous, and cover its usage. This blog post is co-authored by Shivam Garg, Senior Program Manager Lead.

     

    Choosing an existing virtual machine

    This method of initial replication is rather self-explanatory – it takes an existing VM on the replica site as the baseline to be synced with the primary. However, it’s not enough to pick any virtual machine on the replica site to use as an initial copy. Hyper-V Replica places certain requirements on the VM that can be used in this method of initial replication:

    1. It has to have the same virtual machine ID as that of the primary VM
    2. It should have the same disks (and disk properties) as that of the primary VM

    Given the restrictions placed on the existing VM that can act as an initial copy, there are a few clear ways to get such a VM:

    • Restore the VM from backup. Historically, the disaster recovery strategy for most companies involved taking backups and restoring the datacenter from these backups. This strategy also implies that there is a mechanism in place to transport the backed-up data to the recovery site. This makes the backed-up copies an excellent start point for Hyper-V Replica’s disaster recovery process. The data will be older – depending on the backup policies – but it will satisfy the criteria to use this initial replication method. Of course, it is suggested to use the latest backup data so as to keep the delta changes to the minimum.
    • Export the VM from the primary and import on the replica. Of course, the exported VM needs to be transported to the other site so this option is similar to out-of-band initial replication using external media.
    • Use an older Replica VM. When a replication relationship is removed, the Replica VM remains – and this VM can be used as the initial copy when replication is enabled again for the same VM in the future.
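    As a sketch of the export/import option above (all paths, server names, and the VM GUID are placeholders), the cmdlets look along these lines:

    ```powershell
    # Export the VM on the primary host; copy the exported folder to
    # external media and transport it to the replica site.
    Export-VM -Name "Test-VM" -Path "D:\Export" -ComputerName primary.contoso.com

    # On the replica server, register the copied VM in place. Importing
    # without -Copy/-GenerateNewId preserves the virtual machine ID,
    # which is required for it to act as the initial copy.
    Import-VM -Path "E:\Export\Test-VM\Virtual Machines\<VM GUID>.xml" -ComputerName replica.contoso.com
    ```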

     

    Syncing the primary and Replica VMs

    Although there is a complete VM on the replica side, the Replica VM lags behind the primary VM in terms of the freshness of the data. So as a part of the initial replication process the two VMs have to be brought into sync. This process is very similar to resynchronization and is very IOPS intensive. Depending on the differences between the primary and Replica VHDs, there could also be significant network traffic to transfer the delta changes from the primary site to the replica site.

     

    When to use this initial replication method

    The biggest advantage of using an existing VM is that the VHDs are already present on the replica site. But this also assumes that most of the data is already present in those VHDs. For example, when restoring the VM from backup, the backup copy would be a few hours behind the primary… perhaps a day behind. The assumption is that the delta changes [between the restored VM and the current primary VM] are small enough to be sent over the network. Thus the data difference between the primary VHDs and Replica VHDs should not be too large; otherwise Online IR would be more efficient from an IOPS perspective.

    We also need to consider the size of the VHDs. If the primary VM has large VHDs then Online IR might not be preferred to begin with, and OOB IR would be used for initial replication. However, if the set of delta changes that can be sent over the network is small enough, then this method could be quicker than OOB IR as well. If both the VHDs and the data difference between the primary and Replica VHDs are large, it is simpler to use OOB IR: in that case this replication method would consume a large number of IOPS and choke the network.

    Thus a replication scenario that involves (1) large VHDs to be replicated and (2) a smaller set of delta changes for syncing [when compared to the size of the VHDs] makes using an existing virtual machine for initial replication an attractive option.

     

    Making this happen with UI and PowerShell

    Using this option through the UI is extremely simple: in the Enable Replication wizard, select “Use an existing virtual machine on the Replica server as the initial copy”.

    image

     

    When using PowerShell, there is a sequence of 3 commands that need to be executed:

    PS C:\> Enable-VMReplication -ComputerName replica.contoso.com -VMName Test-VM -AsReplica
    PS C:\> Enable-VMReplication -ComputerName primary.contoso.com -VMName Test-VM -ReplicaServerName replica.contoso.com -ReplicaServerPort 80 -AuthenticationType Kerberos
    PS C:\> Start-VMInitialReplication -ComputerName primary.contoso.com -VMName Test-VM -UseBackup

    The -UseBackup option in the Start-VMInitialReplication cmdlet is the one that indicates the use of an existing VM on the replica site for the purposes of initial replication.

    As with the other methods of initial replication, you can also schedule when the initial replication process occurs.
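    For example, the -InitialReplicationStartTime parameter can defer the sync to off-peak hours (the date and names below are placeholders):

    ```powershell
    PS C:\> Start-VMInitialReplication -ComputerName primary.contoso.com -VMName Test-VM -UseBackup -InitialReplicationStartTime "8/31/2013 2:00 AM"
    ```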

     

    Working with clusters

    If the Replica VM is on a cluster, ensure that it is made Highly Available (HA) before any further actions are taken. This is a prerequisite: it enables the VM to be picked up by the Failover Cluster service, and consequently by the Hyper-V Replica Broker.

    image

     

    Failing to do so will throw errors similar to this (Event ID 29410):

    Cannot perform the requested Hyper-V Replica operation for virtual machine 'Test-VM' because the virtual machine is not highly available. Make virtual machine highly available using Microsoft Failover Cluster Manager and try again. (Virtual machine ID 6DDC63C1-0135-40CA-B998-A606D91080E9)
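    The VM can also be made highly available from PowerShell on a node of the replica cluster; a minimal sketch (the VM name is a placeholder):

    ```powershell
    PS C:\> Add-ClusterVirtualMachineRole -VMName "Test-VM"
    ```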

     

    Also, the replica server name used in the cmdlets and the UI will be the name of the Hyper-V Replica Broker instance in the cluster.

    PS C:\> Enable-VMReplication -ComputerName replicabroker.contoso.com -VMName Test-VM -AsReplica
    PS C:\> Enable-VMReplication -ComputerName primary.contoso.com -VMName Test-VM -ReplicaServerName replicabroker.contoso.com -ReplicaServerPort 80 -AuthenticationType Kerberos
    PS C:\> Start-VMInitialReplication -ComputerName primary.contoso.com -VMName Test-VM -UseBackup

     

     

     

     

    Which initial replication method do you use on your setup? We would be interested in hearing your feedback!

    The Hyper-V Team at VMworld 2013 - Fun Times and Frozen Custard


    Many VMworld 2013 attendees looked at Hyper-V a long time ago, and haven’t kept up with the progress our platform has made in recent years. If you count yourself among this group, we at Microsoft would love to show you how far we’ve come. However, you needn’t take my word for it – I encourage you to find out for yourself.

    The Hyper-V team took their message and information to VMworld 2013 in a fun and creative way. Varun Chhabra, Senior Product Marketing Manager, Server and Tools, blogs about the experience in “Planes, trucks and frozen custard - The Hyper-V Team at VMworld 2013”, where he had the opportunity to talk with customers, potential customers, and members of the VMware staff. It is an insightful view. Check it out!

    And for those of you interested in downloading some of the other products and trying them, here are some resources to help you:

    • Windows Server 2012 R2 Preview download
    • System Center 2012 R2 Preview download
    • SQL Server 2014 Community Technology Preview 1 (CTP1) download
    • Windows 8.1 Enterprise Preview download

    Monitoring Hyper-V Replica using System Center Operations Manager


    Customers have asked us whether they can have a monitoring mechanism for Hyper-V Replica for rainy-day scenarios. With System Center Operations Manager 2012 SP1, customers can now monitor Hyper-V Replica using a management pack available for free from the SCOM catalog. This blog post deals with adding the management packs to a SCOM setup to monitor Hyper-V Replica. If you haven’t completed your setup, return to this blog after setting up SCOM and installing agents. [You can refer to Installing Operations Manager On a Single Server and Deploying SCOM for installation, and Managing Agents for discovering and installing agents.]

    Before we start monitoring Hyper-V Replica, we need to import the necessary management packs into SCOM. The SCOM catalog provides a management pack named “Microsoft Windows Hyper-V 2012 Monitoring” to monitor the state changes of Hyper-V Replica.

    Import Management Pack

    To import this management pack,

    1. Go to “Authoring Workspace” and click on “Import management packs”. This will open “Import Management packs” form.

    clip_image003

    2. Click on “Add” and from the drop down select “Add from Catalog …”. This will open Catalog Menu.

    3. In the Find field, type “Hyper-V 2012 Monitoring” and click Search.

    clip_image005

    4. Select “Microsoft Windows Hyper-V 2012 Monitoring” and Click “Add” and then Click “OK”.

    5. If you come across a screen like below, it means that required dependent management packs are not imported. Click on “Resolve”.

    clip_image007

    6. In the Dependency Warning that pops up, click Resolve. This action will list all the dependent management packs that need to be imported. Click Install.

    7. Once all packs are imported, click on Close.

    You can verify that the management pack was imported by going to the Monitoring workspace and looking for “Microsoft Windows Server Hyper-V”:

    clip_image010

    To get a list of the available monitors, click Tools->Search->Monitors and type “Replica” in the search field. This will list all 9 monitors provided by the management pack.

    The supported monitors, and the root causes that trigger them, are summarized below:

    Hyper-V 2012 Replica Windows Firewall Rule Monitor
    • The Windows Firewall rule to allow replication traffic to the Replica site has not been enabled.

    Hyper-V 2012 Replication Critical Suspended state monitor
    • Network bandwidth is not sufficient to send the accumulated changes from the primary server to the replica server.
    • The storage subsystem on either the primary or replica site is not properly provisioned from a space and IOPS perspective.
    • Replication is paused on either the primary or replica VM.

    Hyper-V 2012 Replication Reverse Replication not initiated
    • Failover has been initiated but reverse replication to the primary has not been initiated.
    • Replication is not enabled for the failed-over VM.

    Hyper-V 2012 Replication not started monitor
    • Initial replication has not been completed after setting up a replication relationship.

    Hyper-V 2012 Replica out of sync
    • Lack of network connectivity between the primary and replica servers.
    • Network bandwidth is not sufficient to send the accumulated changes from the primary server to the replica server.
    • The storage subsystem on either the primary or replica site is not properly provisioned from a space and IOPS perspective.
    • Replication on the primary or replica VM might be paused.

    Hyper-V 2012 Node's Replication broker configuration monitor
    • The Cluster service stopped unexpectedly.
    • The Hyper-V Replica Broker is unable to come up on the destination node after a cluster migration.

    Hyper-V 2012 Replica Network Listener
    • A conflict on the network port configured for Replica, or SPN registration might have failed (Kerberos).
    • The certificate provided is either invalid or doesn’t meet the prerequisites (HTTPS).

    Hyper-V 2012 Replication Resync Required state monitor
    • The VM went into Resync required mode.

    Hyper-V 2012 Replication Count Percent Monitor
    • The replicating virtual machine has missed more than the configured percentage of replication cycles.
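    As a sketch of remediating the firewall monitor above, the inbox Hyper-V Replica listener rules can be enabled from PowerShell (the rule display names below are from an English-locale installation and may differ on yours):

    ```powershell
    PS C:\> Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"
    PS C:\> Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTPS Listener (TCP-In)"
    ```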

    Viewing the properties of the monitor:

    To view the properties of a monitor, select the monitor (you can select results from Search and click “View->Monitors”), right-click it, and click “Properties”.

    General Properties: Defines the name, gives a description of the monitor, and identifies the target. It also mentions which parent monitor it belongs to. (More on monitors, here)

    Health: Lists the conditions that trigger a change in the monitor’s health state.

    Alerting: Displays settings related to the generation of alerts.

    Diagnostic and Recovery: Lets you create a diagnostic task and configure whether it runs automatically or is triggered manually once an alert is generated. You can also create a recovery task in VBScript or JScript, or as a PowerShell cmdlet.

    Configuration: Lists the important parameters of the monitor’s default properties.

    Product Knowledge: Provides a summary of what the monitor tries to achieve, the causes of state changes, and a handful of resolutions to return to a healthy state.

    clip_image012

    Changing the properties of the Monitor:

    You can control the way alerts are generated and their triggering properties. To change the properties of a monitor, go to the monitor (you can select results from Search and click “View->Monitors”), click “Overrides->Override the Monitor”, and select the appropriate objects for which you want to change the monitor’s properties.

    clip_image014

    After you have selected the override option, you will be presented with the following UI. Select the property you want to change, check the “Override” checkbox, and change the value. You can also select the management pack in which to save the updated monitor.

    clip_image016

    Diagnostic and Recovery Task:

    You can add a diagnostic and recovery task for a monitor through the monitor properties UI as discussed above. To create a diagnostic or recovery task, click “Add->Diagnostic for Critical Health State” in the “Configure Diagnostic Tasks” section under the Diagnostic and Recovery tab. You can run either a command or a script as a diagnostic task, select the health state for which it will be executed, and optionally have the command or script run automatically once the monitor state changes. You can also edit or remove previously added tasks.

    To trigger a diagnostic or recovery task manually for an alert follow these steps:

    1. Select the alert in Monitoring workspace and Click on Health Explorer.

    2. In Health Explorer, select the monitor and, on the right-hand side, click the “State Changes” tab.

    3. Diagnostic tasks are listed immediately after Context while Recovery Tasks can be found at the bottom of the page.

    clip_image018

    clip_image020

    Management Pack from Codeplex:

    One of our field engineers, Cristian Edwards Sabathe, has developed a management pack that displays the state of replication in a dashboard. Download the pack from here.

    Once you have downloaded the pack, to import pack into SCOM follow these steps:

    1. Go to authoring workspace and click on Import Management Pack.

    2. Click “Add” and select the “Add from disk” option.

    3. Provide the path to the folder into which you downloaded the packs from the link above.

    4. If any dependent management packs are missing, this will be reported in the UI. Click “Resolve” to import all the dependency packs.

    Hyper-V Replica DashBoard:

    The Hyper-V Replica dashboard will be present in the Monitoring view, as part of the “Hyper-V MP Extension 2012-> Hyper V Replica” folder. The dashboard displays the source of the virtual machine and its health state using icons.

    clip_image002[9]

    The Primary VMs/Recovery VMs view shows the primary VMs, their health state, replication state, and replication health (1: normal; 2: warning; 3: critical), the primary and recovery servers for the VM, and the mode of replication, along with many other useful fields which can be customized using the “Personalize view” option.

    clip_image002[7]

    Notification of alerts

    Alerts are generated whenever a state change occurs. Great! But do I have to watch the SCOM screen 24x7 to see if an alert is generated? Fortunately, the answer is no: SCOM provides a subscription mechanism through which the user gets the alert via email, SMS, or IM, or can raise a ticket.

    1. To create a subscription, select any alert and select “Subscription->Create” in the right hand side of the UI in Authoring workspace. This will open up Notification Subscription wizard.

    clip_image026

    2. In the wizard, specify a Name and Description to Subscription and click Next.

    3. In the Conditions box, check “created by specific rules or monitors”. In the criteria description box, click the already-existing monitor to bring up the “Monitor and Rule Search” form.

    clip_image028

    4. In the Monitor and Rule Search form, type “Replica” in the Filter By field and click “Search”. This will list all 9 monitors under available rules and monitors. Select the monitors whose alerts you want to receive notifications for and add them by clicking the “Add” button. Once you have added all the desired monitors, click “OK”.

    clip_image030

    5. Click “Next” in the Notification Subscription wizard. Complete the wizard as per your subscription requirements. You can refer to How to Create Notification Subscribers and Subscribing to Alert Notifications for details on how to complete the wizard.

    In summary, the management packs from the catalog and CodePlex provide a great way to monitor Hyper-V Replica through System Center Operations Manager and integrate with it seamlessly.

    Replicating fixed disks to dynamic disks in Hyper-V Replica


    A recent conversation with a hosting provider using Hyper-V Replica brought an interesting question to the fore. The hosting provider’s services were aimed primarily at Small and Medium Businesses (SMBs), with one service being DR-as-a-Service. A lot of the virtual disks being replicated were fixed, had sizes greater than 1 TB, and were mostly empty, as the space had been carved out and reserved for future growth. However, this situation presented a real problem for our hosting provider: storing a whole bunch of large, empty virtual disks eats up real resources. It also means that investment in physical resources is made upfront rather than gradually over a period of time. Surely there had to be a better way, right? Well, this wouldn’t be a very good blog post if there wasn’t a better way! :)

    A great way to trim those fat fixed virtual disks is to convert them into dynamic disks, and use the dynamic disks on the Replica side. So replication would happen from the SMB datacenter (fixed disk) to the hosting provider’s datacenter (dynamic disk). Dynamic disks take up only as much physical storage as is present inside the disk, making them very efficient for storage and very useful to hosting providers. The icing on the cake is that Hyper-V Replica works great in such a configuration!

    But what about the network – does this method help save any bandwidth? At the time of enabling replication, the compression option is selected by default. This means that when Hyper-V Replica encounters large swathes of empty space in the virtual disk, it is able to compress this data and then send the data across. So the good news is that excessive bandwidth usage is not a concern to begin with.

    One of the early decisions to be made is whether this change is done on the primary side by the customer, or on the replica side by the hosting provider. Asking each customer to change from fixed disks to dynamic disks would be a long drawn out process – and customers might want to keep their existing configuration. The more likely scenario is that the hosting provider will make the changes and it will be transparent to the customer that is replicating.

    So let’s deep-dive into how to make this happen.

    Converting a disk from fixed to dynamic

    This process is simple enough, and can be done through the Edit Disk wizard in the Hyper-V Manager UI. Choose the virtual disk that needs editing, choose Convert as the action to be taken, and pick Dynamically expanding as the target disk type. Continue to the end and your disk will be converted from fixed to dynamic.

    NOTE 1: An important constraint to remember is that the target disk format should be the same as the source disk format. This means that you should pick the disk format as VHD if your fixed disk has a VHD extension, and you should pick VHDX if your fixed disk has a VHDX extension.

    NOTE 2: The name of your dynamic disk should be exactly the same as the name of your fixed disk.

    Edit disk

    (The destination location has been changed so that the same filename can be kept)

    To get the same result using PowerShell, use the following command:

    PS C:\> Convert-VHD -Path c:\FixedDisk.vhdx -DestinationPath f:\FixedDisk.vhdx -VHDType Dynamic

    Making it work with Hyper-V Replica

    1. Enable replication from the customer to the hosting provider using online IR or out-of-band IR.
    2. The hosting provider waits for the IR to complete.
    3. The hosting provider can then pause the replication at any time on the Replica server; this prevents HRL logs from being applied to the disk while it is being converted.
    4. The hosting provider can then convert the disk from fixed to dynamic using the technique mentioned above. Ensure that there is adequate storage space to hold both disks until the process is complete.
    5. The hosting provider then replaces the fixed disk with the dynamic disk at the same path and with the same name.
    6. The hosting provider resumes replication on the Replica site.
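    The pause/convert/resume steps above can be sketched in PowerShell on the replica server (the VM name and all paths are placeholders):

    ```powershell
    # Step 3: pause replication so HRL logs are not applied during conversion
    PS C:\> Suspend-VMReplication -VMName Test-VM

    # Steps 4-5: convert the fixed disk to dynamic, then swap it in
    # under the original path and name
    PS C:\> Convert-VHD -Path "F:\Replica\Test-VM.vhdx" -DestinationPath "G:\Temp\Test-VM.vhdx" -VHDType Dynamic
    PS C:\> Move-Item "G:\Temp\Test-VM.vhdx" "F:\Replica\Test-VM.vhdx" -Force

    # Step 6: resume replication once the dynamic disk is in place
    PS C:\> Resume-VMReplication -VMName Test-VM
    ```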

    Now Hyper-V Replica will use the dynamic disk seamlessly and the hosting provider’s storage consumption is reduced.

    Additional optimization for out-of-band IR

    In out-of-band IR, the data is transferred to the Replica site using an external medium like a USB device. It becomes possible to convert the disk from fixed to dynamic before importing it on the Replica site. The disks on the external medium are used directly as the source, which removes the need for additional storage while the conversion operation completes (step 4 in the process above). Thus the hosting provider can import and store only the dynamic disk.

    Do try this out and let us know the feedback!

    Hyper-V Replica BPA Rules


    A frequent question from our customers is whether there are standard “best practices” when deploying Hyper-V Replica (or any Windows Server role, for that matter). These questions come in many forms: does the Product Group have any configuration gotchas based on internal testing, is my server properly configured, should I change any replication configuration, and so on.

    Best Practices Analyzer (BPA) is a powerful inbox tool which scans the server for any potential ‘best practice’ violations. The report describes each problem and also provides a recommendation to fix the issue. You can use the BPA from both the UI and PowerShell.

    From the Server Manager Dashboard, click Hyper-V, scroll down to the Best Practices Analyzer section, click Tasks, and then click Start BPA Run.

    BPA_3

    Once the scan is complete, you can filter the issues by Warnings or Errors, Excluded Results, and Compliant Results.

    The same can be done through PowerShell by executing the following cmdlets

    Invoke-BpaModel -ModelId Microsoft/Windows/Hyper-V
     
    Get-BpaResult -ModelId Microsoft/Windows/Hyper-V

    To filter non-compliant rules, issue the following cmdlet

    Get-BpaResult -ModelId Microsoft/Windows/Hyper-V -Filter Noncompliant
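    To look at just the Hyper-V Replica rules (IDs 37-54 in the table below), the results can be piped through a filter along these lines (a sketch; the property names match those shown in the table output):

    ```powershell
    PS C:\> Get-BpaResult -ModelId Microsoft/Windows/Hyper-V |
                Where-Object { $_.RuleId -ge 37 -and $_.RuleId -le 54 } |
                Format-Table RuleId, Title, Severity, Compliance -AutoSize
    ```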

    On a Windows Server 2012 server, the following rules constitute the Hyper-V BPA. The Hyper-V Replica-specific rules are rules 37-54.

    RuleId Title
    ------ -----
    3      The Hyper-V Virtual Machine Management Service should be configured to start automatically
    4      Hyper-V should be the only enabled role
    5      The Server Core installation option is recommended for servers running Hyper-V
    6      Domain membership is recommended for servers running Hyper-V
    7      Avoid pausing a virtual machine
    8      Offer all available integration services to virtual machines
    9      Storage controllers should be enabled in virtual machines to provide access to attached storage
    10     Display adapters should be enabled in virtual machines to provide video capabilities
    11     Run the current version of integration services in all guest operating systems
    12     Enable all integration services in virtual machines
    13     The number of logical processors in use must not exceed the supported maximum
    14     Use RAM that provides error correction
    15     The number of running or configured virtual machines must be within supported limits
    16     Second-level address translation is required when running virtual machines enabled for RemoteFX
    17     At least one GPU on the physical computer should support RemoteFX and meet the minimum requirements for DirectX when virtual machines are configured with a RemoteFX 3D video adapter
    18     Avoid installing RemoteFX on a computer that is configured as an Active Directory domain controller
    19     Use at least SMB protocol version 3.0 for file shares that store files for virtual machines.
    20     Use at least SMB protocol version 3.0 configured for continuous availability on file shares that store files for virtual machines.
    37     A Replica server must be configured to accept replication requests
    38     Replica servers should be configured to identify specific primary servers authorized to send replication traffic
    39     Compression is recommended for replication traffic
    40     Configure guest operating systems for VSS-based backups to enable application-consistent snapshots for Hyper-V Replica
    41     Integration services must be installed before primary or Replica virtual machines can use an alternate IP address after a failover
    42     Authorization entries should have distinct tags for primary servers with virtual machines that are not part of the same security group.
    43     To participate in replication, servers in failover clusters must have a Hyper-V Replica Broker configured
    44     Certificate-based authentication is recommended for replication.
    45     Virtual hard disks with paging files should be excluded from replication
    46     Configure a policy to throttle the replication traffic on the network
    47     Configure the Failover TCP/IP settings that you want the Replica virtual machine to use in the event of a failover
    48     Resynchronization of replication should be scheduled for off-peak hours
    49     Certificate-based authentication is configured, but the specified certificate is not installed on the Replica server or failover cluster nodes
    50     Replication is paused for one or more virtual machines on this server
    51     Test failover should be attempted after initial replication is complete
    52     Test failovers should be carried out at least monthly to verify that failover will succeed and that virtual machine workloads will operate as expected after failover
    53     VHDX-format virtual hard disks are recommended for virtual machines that have recovery history enabled in replication settings
    54     Recovery snapshots should be removed after failover
    55     At least one network for live migration traffic should have a link speed of at least 1 Gbps
    56     All networks for live migration traffic should have a link speed of at least 1 Gbps
    57     Virtual machines should be backed up at least once every week
    58     Ensure sufficient physical disk space is available when virtual machines use dynamically expanding virtual hard disks
    59     Ensure sufficient physical disk space is available when virtual machines use differencing virtual hard disks
    60     Avoid alignment inconsistencies between virtual blocks and physical disk sectors on dynamic virtual hard disks or differencing disks
    61     VHD-format dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment
    62     Avoid using VHD-format differencing virtual hard disks on virtual machines that run server workloads in a production environment.
    63     Use all virtual functions for networking when they are available
    64     The number of running virtual machines configured for SR-IOV should not exceed the number of virtual functions available to the virtual machines
    65     Configure virtual machines to use SR-IOV only when supported by the guest operating system
    66     Ensure that the virtual function driver operates correctly when a virtual machine is configured to use SR-IOV
    67     Configure the server with a sufficient amount of dynamic MAC addresses
    68     More than one network adapter should be available
    69     All virtual network adapters should be enabled
    70     Enable all virtual network adapters configured for a virtual machine
    72     Avoid using legacy network adapters when the guest operating system supports network adapters
    73     Ensure that all mandatory virtual switch extensions are available
    74     A team bound to a virtual switch should only have one exposed team interface
    75     The team interface bound to a virtual switch should be in default mode
    76     VMQ should be enabled on VMQ-capable physical network adapters bound to an external virtual switch
    77     One or more network adapters should be configured as the destination for Port Mirroring
    78     One or more network adapters should be configured as the source for Port Mirroring
    79     PVLAN configuration on a virtual switch must be consistent
    80     The WFP virtual switch extension should be enabled if it is required by third party extensions
    81     A virtual SAN should be associated with a physical host bus adapter
    82     Virtual machines configured with a virtual Fibre Channel adapter should be configured for high availability to the Fibre Channel-based storage
    83     Avoid enabling virtual machines configured with virtual Fibre Channel adapters to allow live migrations when there are fewer paths to Fibre Channel logical units (LUNs) on the destination than on the source
    106    Avoid using snapshots on a virtual machine that runs a server workload in a production environment
    107    Configure a virtual machine with a SCSI controller to be able to hot plug and hot unplug storage
    108    Configure SCSI controllers only when supported by the guest operating system
    109    Avoid configuring virtual machines to allow unfiltered SCSI commands
    110    Avoid using virtual hard disks with a sector size less than the sector size of the physical storage that stores the virtual hard disk file
    111    Avoid configuring a child storage resource pool when the directory path of the child is not a subdirectory of the parent
    112    Avoid mapping one storage path to multiple resource pools.

     

Go ahead and run the BPA; you might learn something interesting from the non-compliant rules! Fix the errors reported under the non-compliant rules and then re-run the scan. The BPA scan is non-intrusive and should not impact your production workload.
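If you prefer to drive the scan from PowerShell, the BestPractices module cmdlets can invoke the same Hyper-V BPA model. A minimal sketch, to be run from an elevated PowerShell prompt (the severity filter shown is just one way to surface non-compliant results):

```powershell
# Run the Hyper-V Best Practices Analyzer scan
Invoke-BpaModel -ModelId Microsoft/Windows/Hyper-V

# Show only the non-compliant (Warning/Error) results
Get-BpaResult -ModelId Microsoft/Windows/Hyper-V |
    Where-Object { $_.Severity -ne "Information" } |
    Select-Object Severity, Title
```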

    Replica Clusters behind a NAT


When a Hyper-V Replica Broker is configured in your DR site to accept replication traffic, Hyper-V, together with Failover Clustering, propagates these settings to all the nodes of the cluster. A network listener is started on each node of the cluster on the configured port.


     

While this seamless configuration works for a majority of our customers, some have asked for the ability to bring up the network listener on a different port on each replica server (e.g., port 8081 on R1.contoso.com, port 8082 on R2.contoso.com, and so on). One such scenario involves placing a NAT device in front of the Replica cluster, with port-based rules that redirect traffic to the appropriate server.

Before going any further, here is a quick refresher on how placement logic and traffic redirection happen in Hyper-V Replica.

1) When the primary server contacts the Hyper-V Replica Broker, the broker finds a replica server on which the replica VM can reside, and returns the FQDN of that replica server (e.g., R3.contoso.com) and the port to which the replication traffic needs to be sent.

    2) Any subsequent communication happens between the primary server and the replica server (R3.contoso.com) without the Hyper-V Replica Broker’s involvement.

3) If the VM migrates from R3.contoso.com to R2.contoso.com, replication between the primary server and R3.contoso.com fails because the VM is no longer available on R3.contoso.com. After retrying a few times, the primary server contacts the Hyper-V Replica Broker, indicating that it is unable to find the VM on the replica server (R3.contoso.com). In response, the Hyper-V Replica Broker looks into the cluster and replies that the replica VM now resides on R2.contoso.com, along with the port number. Replication is then re-established to R2.contoso.com.

    It’s worth calling out that the above steps happen without any manual intervention.
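You can observe this redirection from the primary server itself. A small sketch using Get-VMReplication (the VM name is hypothetical, and the property selection assumes the Windows Server 2012 Hyper-V module):

```powershell
# On the primary server: see which replica node currently hosts the replica VM
Get-VMReplication -VMName "SQLVM01" |
    Select-Object VMName, State, Health, ReplicaServerName, CurrentReplicaServerName
```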

In a NAT environment where port-based address translation is used (i.e., traffic is routed to a particular server based on the destination port), the above communication mechanism fails. This is because the network listener on each of the servers (R1, R2, … Rn.contoso.com) comes up on the same port. Since the Hyper-V Replica Broker returns the same port number in each of its responses to the primary server, an incoming request that hits the NAT device cannot be uniquely routed to a server.

Needless to say, if there is a one-to-one mapping between the ‘public’ IP addresses exposed by the NAT and the ‘private’ IP addresses of the servers (R1, R2, … Rn.contoso.com), the default configuration works fine.

So how do we address this problem? Consider the following three-node cluster with these names and IP addresses: R1.contoso.com @ 192.168.1.2, R2.contoso.com @ 192.168.1.3, and R3.contoso.com @ 192.168.1.4.

1) Create the Hyper-V Replica Broker resource using the following cmdlets, with a static IP address of your choice (192.168.1.5 in this example):

$BrokerName = "HVR-Broker"
Add-ClusterServerRole -Name $BrokerName -StaticAddress 192.168.1.5
Add-ClusterResource -Name "Virtual Machine Replication Broker" -Type "Virtual Machine Replication Broker" -Group $BrokerName
Add-ClusterResourceDependency "Virtual Machine Replication Broker" $BrokerName
Start-ClusterGroup $BrokerName

2) Hash table of server name and port: Create a hash table mapping each server name to the port on which the listener should come up on that server.

$portmap = @{"R1.contoso.com"=8081; "R2.contoso.com"=8082; "R3.contoso.com"=8083; "HVR-Broker.contoso.com"=8080}

3) Enable the replica server to receive replication traffic, providing the hash table as an input:

Set-VMReplicationServer -ReplicationEnabled $true -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "C:\ClusterStorage\Volume1" `
    -AllowedAuthenticationType Kerberos `
    -KerberosAuthenticationPortMapping $portmap

4) NAT table: Configure the NAT device with the same mapping as provided to the Set-VMReplicationServer cmdlet. The screenshot below shows an RRAS-based NAT device; a similar configuration can be done on a device from any vendor of your choice. The screenshot captures the mapping for the Hyper-V Replica Broker; a similar mapping needs to be created for each of the replica servers.

[Screenshot: RRAS NAT port-mapping configuration for the Hyper-V Replica Broker]

5) Ensure that the primary server resolves the names of the replica servers and the broker to the public IP address of the NAT device, and that the appropriate firewall rules have been enabled.
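As an illustrative sketch of step 5 in a lab (the public IP address 131.107.0.50, the firewall rule name, and the hosts-file approach are assumptions for this example; use DNS and your own addressing in production):

```powershell
# On the primary server: resolve all replica names to the NAT's public IP
# (hosts-file entries are a lab shortcut; prefer DNS in production)
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value @"
131.107.0.50  HVR-Broker.contoso.com
131.107.0.50  R1.contoso.com
131.107.0.50  R2.contoso.com
131.107.0.50  R3.contoso.com
"@

# On each replica node: allow its custom listener port (8081 on R1, and so on)
New-NetFirewallRule -DisplayName "Hyper-V Replica Listener (custom port)" `
    -Direction Inbound -Protocol TCP -LocalPort 8081 -Action Allow
```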

    That’s it – you are all set! Replication works seamlessly as before and now you have the capability to reach the Replica server in a port based NAT environment.

    What’s new in Hyper-V Replica in Windows Server 2012 R2


    18th October 2013 marked the General Availability of Windows Server 2012 R2. The teams have accomplished an amazing set of features in this short release cycle and Brad’s post @ http://blogs.technet.com/b/in_the_cloud/archive/2013/10/18/today-is-the-ga-for-the-cloud-os.aspx captures the investments made across the board. We encourage you to update to the latest version and share your feedback.

This post captures the top 8 improvements to Hyper-V Replica in Windows Server 2012 R2. We will dive deep into each of these features in the coming weeks through blog posts and TechNet articles.

    Seamless Upgrade

You can upgrade from Windows Server 2012 to Windows Server 2012 R2 without having to redo initial replication (re-IR) for your protected VMs. With new features such as cross-version live migration, it is easy to maintain your DR story across OS upgrades. You can also choose to upgrade your primary site and replica site at different times, as Hyper-V Replica will replicate your virtual machines from a Windows Server 2012 environment to a Windows Server 2012 R2 environment.

    30 second replication frequency

Windows Server 2012 allowed customers to replicate their virtual machines at a preset 5-minute replication frequency. Customers asked both for a lower replication frequency and for the flexibility to set different frequencies for different virtual machines. With Windows Server 2012 R2, you can now asynchronously replicate your virtual machines at a 30-second, 5-minute, or 15-minute frequency.
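The frequency is chosen per virtual machine. A sketch for changing it on a VM that is already replicating (the VM name is hypothetical):

```powershell
# Switch an existing VM's replication frequency to 30 seconds
Set-VMReplication -VMName "SQLVM01" -ReplicationFrequencySec 30
```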


    Additional Recovery Points

Customers can now retain a longer history with 24 recovery points, up from 16 in Windows Server 2012. These recovery points are spaced at an hour's interval.
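The number of recovery points is also configured per virtual machine; a sketch (the VM name is hypothetical):

```powershell
# Retain 24 hourly recovery points for this VM's replica
Set-VMReplication -VMName "SQLVM01" -RecoveryHistory 24
```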
           

    Linux guest OS support

Hyper-V Replica has, since its first release, been agnostic to the application and guest OS. However, certain capabilities were initially unavailable for non-Windows guest operating systems. With Windows Server 2012 R2, Hyper-V Replica integrates tightly with non-Windows operating systems to provide file-system-consistent snapshots and to inject IP addresses as part of the failover workflow.

    Extended Replication

You can now ‘extend’ your replica copy to a third site using the ‘Extended Replication’ feature. This functionality provides an added layer of protection for recovering from a disaster. You can now have a replica copy within your site (e.g., ClusterA -> ClusterB in your primary datacenter) and extend the replication of the protected VMs from ClusterB -> ClusterC (in your secondary datacenter).
           

To recover from a disaster in ClusterA, you can quickly fail over to the VMs in ClusterB and continue to protect them to ClusterC. More on extended replication capabilities in the coming weeks.
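Extended replication is enabled from the Replica side for a VM that is already being replicated there; a sketch (the VM name, broker name, and port are assumptions for this example):

```powershell
# Run on ClusterB (the current Replica) to extend the VM's replication to ClusterC
Enable-VMReplication -VMName "SQLVM01" `
    -ReplicaServerName "ClusterC-Broker.contoso.com" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -ReplicationFrequencySec 300
```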

    Performance Improvements

    Significant architectural investments were made to lower the IOPS and storage resources required on the Replica server. The most important of these was to move away from snapshot-based recovery points to “undo logs” based recovery points. These changes have a profound impact on the way the system scales up and consumes resources, and will be covered in greater detail in the coming weeks.

    Online Resize

In Windows Server 2012, Hyper-V Replica was closely integrated with various Hyper-V features such as live migration and storage migration. Windows Server 2012 R2 allows you to resize the virtual hard disks of a running VM; if the VM is protected, you can continue to replicate it without having to redo initial replication.
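A sketch of an online resize with Resize-VHD (the path and size are assumptions; online resize requires a VHDX attached to a SCSI controller of the running VM):

```powershell
# Grow a data disk of a running, replicated VM without interrupting replication
Resize-VHD -Path "C:\VMs\SQLVM01\Data.vhdx" -SizeBytes 500GB
```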

    Hyper-V Recovery Manager

We are also excited to announce the paid preview of Hyper-V Recovery Manager (HRM) (http://blogs.technet.com/b/scvmm/archive/2013/10/21/announcing-paid-preview-of-windows-azure-hyper-v-recovery-manager.aspx). This is a Windows Azure service that allows you to manage and orchestrate various DR workflows between the primary and recovery datacenters. HRM does *not* replicate virtual machines to Windows Azure; your data is replicated directly between the primary and recovery datacenters. HRM is the disaster recovery “management head”, offered as a service on Azure.
