Channel: Windows Virtualization Team Blog

TechEd North America 2014


This year TechEd North America is happening from May 12 to May 15 in Houston. There are some interesting sessions around Backup and Disaster Recovery, so I would highly encourage you all to attend these sessions and interact with the folks presenting.

Sessions on May 12

[Session schedule images]

Sessions on May 15

[Session schedule images]

 

Looking forward to seeing you all there!


See you at TechEd North America!


Here's a quick intro to the Hyper-V people attending TechEd North America in Houston next week.  I'm also posting booth times for each of us.

The booth's official name is: Datacenter & Infrastructure Management: Cloud & Datacenter Infrastructure Solutions

It will be at the center of the Expo Hall.

 

*Note* 

These are the times we have to be at a booth.  Chances are we'll be there beyond these hours.

 

Sam Chandrashekar

Wednesday                        3:00 PM - 6:00 PM

Thursday                             10:45 AM - 12:45 PM

 

Patrick Lang

Wednesday                        3:00 PM - 6:00 PM

Thursday                             10:45 AM - 12:45 PM

 

Taylor Brown

Monday                               10:15 AM - 12:15 PM

 

Sarah Cooley -- I'm on loan to the Windows team. Come talk to me next door at the Windows, Phone, & Devices block: Mobility

Monday                               5:45 PM-8:30 PM

Tuesday                               10:45 AM-12:30 PM

                                                2:15 PM-4:00 PM

Wednesday                        10:45 AM-1:00 PM

                                                3:00 PM-6:00 PM

Thursday                             10:45 AM-12:45 PM

 

Ben Armstrong

Monday                               10:15 AM-12:15 PM

Tuesday                               10:45 AM-12:30 PM

Wednesday                        12:45 PM-3:15 PM

Thursday                             10:45 AM-12:45 PM

 

Jeff Woolsey - running around

 

Again, feel free to stop by and talk to us. Looking forward to seeing you at TechEd.

 


[From left to right: Taylor, Sam, Ben, Sarah, Jeff, Patrick]

 

Cheers,

Sarah

Excluding virtual disks in Hyper-V Replica


Since its introduction in Windows Server 2012, Hyper-V Replica has provided a way for users to exclude specific virtual disks from being replicated. This option is rarely exercised but can have significant benefits when used correctly. This blog post covers the disk exclusion scenarios and the impact this has on the various operations done during the lifecycle of VM replication. This blog post has been co-authored by Priyank Gaharwar of the Hyper-V Replica test team.

Why exclude disks?

Excluding disks from replication is done because:

  1. The data churned on the excluded disk is not important or doesn’t need to be replicated, and
  2. Storage and network resources can be saved by not replicating this churn

Point #1 is worth elaborating on a little. What data isn't “important”? The lens used to judge the importance of replicated data is its usefulness at the time of Failover. Data that is not replicated should also not be needed at the time of failover. Lack of this data would then also not impact the Recovery Point Objective (RPO) in any material way.

There are some specific examples of data churn that can be easily identified and are great candidates for exclusion – for example, page file writes. Depending on the workload and the storage subsystem, the page file can register a significant amount of churn. However, replicating this data from the primary site to the replica site would be resource intensive and yet completely worthless. Thus the replication of a VM with a single virtual disk containing both the OS and the page file can be optimized by:

  1. Splitting the single virtual disk into two virtual disks – one with the OS, and one with the page file
  2. Excluding the page file disk from replication
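As an illustrative sketch (not the exact commands used in this post), the host-side part of these two steps could look like the following; the VM name and disk path reuse the exclusion example later in this post, and the 20 GB size is an assumption:

```powershell
# Create a new dynamic VHDX to hold only the page file
# (path matches the exclusion example later in this post; 20 GB is an assumed size)
New-VHD -Path 'D:\Primary-Site\Hyper-V\Virtual Hard Disks\SQL-PageFile.vhdx' `
        -SizeBytes 20GB -Dynamic

# Attach the new disk to the VM on a SCSI controller
Add-VMHardDiskDrive -VMName 'SQLSERVER' -ControllerType SCSI `
                    -Path 'D:\Primary-Site\Hyper-V\Virtual Hard Disks\SQL-PageFile.vhdx'
```

The page file location is then moved inside the guest, as shown in Figure 1.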

How to exclude disks

Application impact - isolating the churn to a separate disk

The first step in using this feature is to isolate the superfluous churn onto a separate virtual disk, similar to what is described above for page files. This is a change to the virtual machine and to the guest. Depending on how your VM is configured and what kind of disk you are adding (IDE, SCSI), you may have to power off your VM before any changes can be made.

At the end, an additional disk should surface in the guest. Appropriate configuration changes should be made in the application to point the location of the temporary files to the newly added disk.

Figure 1:  Changing the location of the System Page File to another disk/volume

Excluding disks in the Hyper-V Replica UI

Right-click on a VM and select “Enable Replication…”. This will bring up the wizard that walks you through the various inputs required to enable replication on the VM. The screen titled “Choose Replication VHDs” is where you deselect the virtual disks that you do not want to replicate. By default, all virtual disks will be selected for replication.

Figure 2:  Excluding the page file virtual disk from a virtual machine

Excluding disks using PowerShell

The Enable-VMReplication cmdlet provides two optional parameters: –ExcludedVhd and –ExcludedVhdPath. These parameters should be used to exclude the virtual disks at the time of enabling replication.

PS C:\Windows\system32> Enable-VMReplication -VMName SQLSERVER -ReplicaServerName repserv01.contoso.com -AuthenticationType Kerberos -ReplicaServerPort 80 -ExcludedVhdPath 'D:\Primary-Site\Hyper-V\Virtual Hard Disks\SQL-PageFile.vhdx'

After running this command, you will be able to see the excluded disks under VM Settings > Replication > Replication VHDs.

Figure 3:  List of disks included for and excluded from replication
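The same check can also be scripted. For example (assuming the VM name from the earlier command), the ExcludedDisks property of the replication object lists the excluded virtual disks:

```powershell
# List the virtual disks excluded from replication for this VM
Get-VMReplication -VMName SQLSERVER | Select-Object -ExpandProperty ExcludedDisks
```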

Impact of disk exclusion

  • Enable replication:  A placeholder disk (for use during initial replication) is not created on the Replica VM. The excluded disk doesn’t exist on the replica in any form.
  • Initial replication:  The data from the excluded disks is not transferred to the replica site.
  • Delta replication:  The churn on any of the excluded disks is not transferred to the replica site.
  • Failover:  The failover is initiated without the disk that has been excluded. Applications that refer to the disk/volume in the guest will have incorrect configurations. For page files specifically, if the page file disk is not attached to the VM before VM boot up, the page file location is automatically shifted to the OS disk.
  • Resynchronization:  The excluded disk is not part of the resynchronization process.

Ensuring a successful failover

Most applications have configurable settings that make use of file system paths. In order to run correctly, the application expects these paths to be present. The key to a successful failover and an error-free application startup is to ensure that the configured paths are present where they should be. In the case of file system paths associated with the excluded disk, this means updating the Replica VM by adding a disk - along with any subfolders that need to be present for the application to work correctly.

The prerequisites for doing this correctly are:

  • The disk should be added to the Replica VM before the VM is started. This can be done at any time after initial replication completes, but is preferably done immediately after the VM has failed over.
  • The disk should be added to the Replica VM with the exact controller type, controller number, and controller location as the disk has on the primary.

There are two ways of making a virtual disk available for use at the time of failover:

  1. Copy the excluded disk manually (once) from the primary site to the replica site
  2. Create a new disk, and format it appropriately (with any folders if required)

When possible, option #2 is preferred over option #1 because of the resources saved by not having to copy the disk. The following PowerShell script can be used to implement option #2, focusing on meeting the prerequisites so that the Replica VM is exactly the same as the primary VM from a virtual disk perspective:

param (
    [string]$VMNAME,
    [string]$PRIMARYSERVER
)
 
## Get VHD details from the primary and the replica
$excludedDisks = Get-VMReplication -VMName $VMNAME -ComputerName $PRIMARYSERVER | Select-Object ExcludedDisks
$includedDisks = Get-VMReplication -VMName $VMNAME | Select-Object ReplicatedDisks
if( $excludedDisks.ExcludedDisks -eq $null ) {
    exit
}
 
# Get the location of the first replica VM disk
$replicaPath = $includedDisks.ReplicatedDisks[0].Path | Split-Path -Parent
 
## Create and attach each excluded disk
foreach( $exDisk in $excludedDisks.ExcludedDisks )
{
    # Get the actual disk object from the primary
    $pDisk = Get-VHD -Path $exDisk.Path -ComputerName $PRIMARYSERVER
    $pDisk

    # Create a new VHD on the Replica with the same geometry as the primary disk
    $diskpath = $replicaPath + "\" + ($pDisk.Path | Split-Path -Leaf)
    $newvhd = New-VHD -Path $diskpath `
                      -SizeBytes $pDisk.Size `
                      -Dynamic `
                      -LogicalSectorSizeBytes $pDisk.LogicalSectorSize `
                      -PhysicalSectorSizeBytes $pDisk.PhysicalSectorSize `
                      -BlockSizeBytes $pDisk.BlockSize `
                      -Verbose
    if( $newvhd -eq $null )
    {
        Write-Host "It is assumed that the VHD [" ($pDisk.Path | Split-Path -Leaf) "] already exists and has been added to the Replica VM [" $VMNAME "]"
        continue
    }
 
    # Mount, initialize, and format the new VHD
    $newvhd | Mount-VHD -Passthru -Verbose `
            | Initialize-Disk -Passthru -Verbose `
            | New-Partition -AssignDriveLetter -UseMaximumSize -Verbose `
            | Format-Volume -FileSystem NTFS -Confirm:$false -Force -Verbose

    # Unmount the disk
    $newvhd | Dismount-VHD -Passthru -Verbose
 
    # Attach the disk to the Replica VM at the same controller location as on the primary
    Add-VMHardDiskDrive -VMName $VMNAME `
                        -ControllerType $exDisk.ControllerType `
                        -ControllerNumber $exDisk.ControllerNumber `
                        -ControllerLocation $exDisk.ControllerLocation `
                        -Path $newvhd.Path `
                        -Verbose
}

The script can also be customized for use with Azure Hyper-V Recovery Manager, but we’ll save that for another post!

Capacity Planner and disk exclusion

The Capacity Planner for Hyper-V Replica allows you to forecast your resource needs. It allows you to be more precise about the replication inputs that impact the resource consumption – such as the disks that will be replicated and the disks that will not be replicated.

Figure 4:  Disks excluded for capacity planning

Key Takeaways

  1. Excluding virtual disks from replication can save on storage, IOPS, and network resources used during replication
  2. At the time of failover, ensure that the excluded virtual disk is attached to the Replica VM
  3. In most cases, the excluded virtual disk can be recreated on the Replica side using the PowerShell script provided

Hyper-V Replica trouble-shooting wiki


We are happy to announce the availability of the Hyper-V Replica troubleshooting wiki here:

http://social.technet.microsoft.com/wiki/contents/articles/21948.hyper-v-replica-troubleshooting-guide.aspx

This guide contains links and resources to troubleshoot some common Hyper-V Replica failure scenarios. We will be updating the guide over time!

We would like this to be a community effort, and you are free to add content to this guide. To add content, follow this high-level schema for new articles (please feel free to add other sections as appropriate):

a. Error Messages/Event Viewer details – This section mentions what error messages the customer will see in the UI/PowerShell/WMI and what event viewer messages are logged.

b. Possible Causes – This section explains the scenario or scenarios which might have led to the failure.

c. Resolution – This section lists the actions the admin has to take in their environment to resolve the failure.

d. Additional resources – A list of blogs/KB articles/documentation/other articles which contain more information for the customer about the failure.

If you are new to TechNet wiki, the guide on “How to contribute” is here.

Happy WIKI’ing!

Replication Health Mailer


One of our engineers, Sangeeth, has come up with a nifty PowerShell script which mails the replication health of a host or a cluster in a nice dashboard format. We thought it would help our customers to get the status of the replicating VMs and their footprint on CPU and memory. You can download the script here.

The sample output from the script looks like this. You can add as many recipients as you wish!

[Sample report screenshot]

On a cluster, you can run this script on one of the cluster nodes to get information about all cluster VMs. You can also get information from a remote host or a remote cluster using the “HostorClusterName” parameter. In the case of a cluster, use the “isCluster” parameter to tell the script to get information from the cluster rather than from the local node.
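Assuming the script has been saved locally (the file name below is a placeholder, and the exact parameter types depend on the downloaded script; “isCluster” is assumed here to be a switch), typical invocations might look like:

```powershell
# Replication health of VMs on the local host
.\ReplicationHealthMailer.ps1

# Replication health of VMs on a remote standalone host (host name is a placeholder)
.\ReplicationHealthMailer.ps1 -HostorClusterName 'HYPERV01'

# Replication health of all VMs in a remote cluster
.\ReplicationHealthMailer.ps1 -HostorClusterName 'HVCLUSTER01' -isCluster
```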

Isn’t it simple and easy to get the replication information about VMs?

Upcoming Preview of 'Disaster Recovery to Azure' Functionality in Hyper-V Recovery Manager


In the coming weeks, we will preview functionality within Hyper-V Recovery Manager to enable Microsoft Azure as a Disaster Recovery point for virtualized workloads. The new functionality will add support for secure and seamless management of failover and failback operations using Azure IaaS Virtual Machines, thereby enabling our customers to save precious CAPEX and ongoing OPEX incurred in managing a secondary site for Disaster Recovery. Our enhanced DRaaS offering further delivers on our promise of democratizing Disaster Recovery and of making it available to everyone, everywhere. Hyper-V Recovery Manager provides enterprise-scale Disaster Recovery using a single-click failover in the event of a disaster to an alternate enterprise data center or to an IaaS VM in Microsoft Azure. Application and Site Level Disaster Recovery is delivered via automation of the overall DR workflow, smart networking, and frequent testing using DR Drills.

 

We announced the Preview during TechEd 2014. For more details about the upcoming Preview and existing Hyper-V Recovery Manager functionality, check out the DCIM-B322 session recording.

Application consistent recovery points with Windows Server 2008/2003 guest OS


I recently had a conversation with a customer around a very interesting problem, and the insights that were gained there are worth sharing. The issue was about VSS errors popping up in the guest event viewer while Hyper-V Replica reported the successful creation of application-consistent (VSS-based) recovery points.

Deployment details

The customer had the following setup that was throwing errors:

  1. Primary site:   Hyper-V Cluster with Windows Server 2012 R2
  2. Replica site:   Hyper-V Cluster with Windows Server 2012 R2
  3. Virtual machines:   SQL server instances with SQL Server 2012 SP1, SQL Server 2005, and SQL Server 2008

At the time of enabling replication, the customer selected the option to create additional recovery points and have the “Volume Shadow Copy Service (VSS) snapshot frequency” as 1 hour. This means that every hour the VSS writer of the guest OS would be invoked to take an application-consistent snapshot.

Symptoms

With this configuration, there was a contradiction in the output – the guest event viewer showed errors/failure during the VSS process, while the Replica VM showed application-consistent points in the recovery history.

Here is an example of the error registered in the guest:

SQLVM: Loc=SignalAbort. Desc=Client initiates abort. ErrorCode=(0). Process=2644. Thread=7212. Client. Instance=. VD=Global\*******
 
BACKUP failed to complete the command BACKUP DATABASE model. Check the backup application log for detailed messages.
 
BackupVirtualDeviceFile::SendFileInfoBegin:  failure on backup device '{********-63**-49**-BA**-5DB6********}1'. Operating system error 995(error not found).

Root cause and Dealing with the errors

The big question was:  Why was Hyper-V Replica showing application-consistent recovery points if there are failures?

The behavior seen by the customer is a benign error caused because of the interaction between Hyper-V and VSS, especially for older versions of the guest OS. Details about this can be found in the KB article here: http://support.microsoft.com/kb/2952783

The Hyper-V requestor explicitly stops the VSS operation right after the OnThaw phase. While this ensures application-consistency of the writes going to the disk, it also results in the VSS errors being logged. Meanwhile, Hyper-V returns the consistency correctly to Hyper-V Replica, which in turn makes sure that the recovery side shows application-consistent points.

A great way to validate whether a recovery point is application-consistent is to do a test failover on that recovery point. If, after the VM has booted up, the event viewer logs contain events pertaining to a rollback, then the point is not application-consistent.
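As a sketch of how such a validation could be driven from PowerShell on the Replica server (the VM name is a placeholder; Start-VMFailover with -AsTest creates a separate test VM from the chosen recovery point):

```powershell
# Pick the latest replica recovery point (snapshot) for the VM
$point = Get-VMSnapshot -VMName 'SQLVM' -SnapshotType Replica |
         Select-Object -Last 1

# Start a test failover on that recovery point; this creates a test VM
Start-VMFailover -VMRecoverySnapshot $point -AsTest

# ...boot the test VM and inspect the guest event logs for rollback events...

# Clean up the test failover when done
Stop-VMFailover -VMName 'SQLVM'
```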

Key Takeaways

  1. All in all, you can rest assured that in the case of VMs with older operating systems, Hyper-V Replica is correctly taking an application-consistent snapshot of the virtual machine.
  2. Although there are errors seen in the guest, they are benign and having a recovery history with application-consistent points is an expected behavior.

Disaster Recovery to Microsoft Azure – Part 1


Drum roll please!

We are super excited to announce the availability of the preview bits of Azure Site Recovery (ASR) which enables you to replicate Hyper-V VMs to Microsoft Azure for business continuity and disaster recovery purposes.

You can now protect, replicate, and failover VMs directly to Microsoft Azure – our guarantee remains that whether you enable Disaster Recovery across On-Premise Enterprise Private Clouds or directly to Azure, your virtualized workloads will be recovered accurately, consistently, with minimal downtime and with minimal data loss.

ASR supports Automated Protection and Replication of VMs, customizable Recovery Plans that enable One-Click Recovery, No-Impact Recovery Plan Testing (ensures that you meet your Audit and Compliance requirements), and best-in-class Security and Privacy features that offer maximum resilience to your business critical applications. All this with minimal cost and without the need to invest in a recovery datacenter. To know more about this announcement and what we have enabled in the Preview, check out Brad Anderson’s In the Cloud blog.

We will cover this feature in detail in the coming weeks – stay tuned and try out the feature. We’d love to hear your feedback!


Disaster Recovery to Microsoft Azure – Part 2


 

Continuing from the previous blog - check out the recent TechEd NA 2014 talk @ https://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/DCIM-B322 which includes a cool demo of this product.

Love it? Talk about it, try it, and share your comments.

Let’s retrace the journey: in Jan 2014, we announced the General Availability of Hyper-V Recovery Manager (HRM). HRM enabled customers to coordinate protection and recovery of virtualized workloads between SCVMM managed clouds. Using this Azure service, customers could set up, monitor, and orchestrate protection and recovery of their Virtual Machines on top of Windows Server 2012 and Windows Server 2012 R2 Hyper-V Replica.

Like Hyper-V Replica, the solution works great when our customers have a secondary location. But what if that isn’t the case? After all, the CAPEX and OPEX cost of building and maintaining multiple datacenters is high. One of the common pieces of feedback to our team was around using Azure as a secondary datacenter. Azure provides a world-class, reliable, resilient platform – at a fraction of the cost of maintaining a secondary datacenter.

The rebranded HRM service - Azure Site Recovery (ASR) - delivers this capability. On 6/19, we announced the availability of the preview version of ASR, which orchestrates, manages, and replicates VMs to Azure.

When a disaster strikes the customer’s on-premises site, ASR can “failover” the replicated VMs in Azure.

And once the customer recovers the on-premises site, ASR can “failback” the Azure IaaS VMs to the customer’s private cloud. We want you to decide which VM runs where and when!

There is some exciting technology built on top of Azure which enables the scenario and in the coming weeks we will dive deep into the workflows and the technology.

Off the top of my head, the key features in the product are:

  • Replication from a System Center 2012 R2 Virtual Machine Manager cloud to Azure – From a SCVMM 2012 R2 managed private cloud, any VM (we will cover some caveats in subsequent blogs) running on Windows Server 2012 R2 hypervisor can be replicated to Azure.

  • Replication frequency of 30 seconds, 5 minutes, or 15 minutes – just like the on-premises product, you can replicate to Azure every 30 seconds.

  • Up to 24 additional recovery points to choose from during failover – You can configure up to 24 additional recovery points at an hourly granularity.

 

  • Encryption @ Rest: You’ve got to love this – we encrypt the data *before* it leaves your on-premises server. We never decrypt the payload till you initiate a failover. You own the encryption key and it’s safe with you.

  • Self-service DR with Planned, Unplanned and Test Failover – Need I say more – everything is in your hands and at your convenience.

  • One click app-level failover using Recovery Plans
  • Audit and compliance reporting
  • …and many more!

The documentation explaining the end to end workflows is available @ http://azure.microsoft.com/en-us/documentation/articles/hyper-v-recovery-manager-azure/ to help you get started.

The landing page for this service is @ http://azure.microsoft.com/en-us/services/site-recovery/

If you have questions when using the product, post them @ http://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?forum=hypervrecovmgr or in this blog.

Keep watching this blog space for more information on this capability.

Azure Site Recovery - FAQ


Quick post to clarify some frequently asked questions on the newly announced Azure Site Recovery service which enables you to protect your Hyper-V VMs to Microsoft Azure. The FAQ will not address every feature capability - it should help you get started.

Q1: Did you just change the name from Hyper-V Recovery Manager to Azure Site Recovery?

A: Nope – we did more than that. Yes, we rebranded Hyper-V Recovery Manager to Azure Site Recovery (ASR) but we also brought in a bunch of new features. This includes the much awaited capability to replicate virtual machines (VMs) to Microsoft Azure. With this feature, ASR now orchestrates replication and recovery between private cloud to private cloud as well as private cloud to Azure.

Q2: What did you GA in Jan 2014?

A: In Jan 2014, we announced the general availability of Hyper-V Recovery Manager (HRM) which enabled you to manage, orchestrate protection & recovery workflows of *your* private clouds. You (as a customer) owned both the primary and secondary datacenter which was managed by SCVMM. Built on top of Windows Server 2012/R2 Hyper-V Replica, we offered a cloud integrated Disaster Recovery Solution.

Q3: HRM was an Azure service but data was replicated between my datacenters? And this continues to work?

A: Yes on both counts. The service was being used to provide the “at-scale” protection & recovery of VMs.

Q4: What is in preview as of June 2014 (now)?

A: The rebranded service now has an added capability to protect VMs to Azure (=> Azure is your secondary datacenter). If your primary machine/server/VM is down due to a planned/unplanned event, you can recover the replicated VM in Azure. You can also bring back (or failback) your VM to your private cloud once it’s recovered from a disaster.

Q5: Wow, so I don’t need a secondary datacenter?

A: Exactly. You don’t need to invest in and maintain a secondary DC. You can reap the benefits of Azure’s SLAs by protecting your VMs in Azure. The replica VM does *NOT* run in Azure till you initiate a failover.

Q6: Where is my data stored?

A: Your data is stored in *your* storage account on top of world class geo-redundant storage provided by Azure.

Q7: Do you encrypt my replica data?

A: Yes. You can also optionally encrypt the data. You own & manage the encryption key. Microsoft never requires it till you opt to fail over the VM in Azure.

Q8: And my VM needs to be part of a SCVMM cloud?

A: Yes. For the current preview release, we need your VMs to be part of a SCVMM managed cloud. Check out the benefits of SCVMM @ http://technet.microsoft.com/en-us/library/dn246490.aspx

Q9: Can I protect any guest OS?

A: Your protection and recovery strategy is tied to Microsoft Azure’s supported operating systems. You can find more details in http://msdn.microsoft.com/en-us/library/azure/dn469078.aspx under the “Virtual Machines support – on premises to Azure” section.

Q10: Ok, but what about the host OS on-premises?

A: For the current preview release, the host OS should be Windows Server 2012 R2.

In summary, you can replicate any supported Windows and Linux SKU mentioned in Q9 running on top of a Windows Server 2012 R2 Hyper-V server.

Q11: Can I replicate Gen-2 VMs on Windows Server 2012 R2?

A: For the preview release, you can protect only Generation 1 VMs. Trying to protect a Gen-2 VM will fail with an appropriate error message.

Q12: Is the product guest-agnostic or do I need to install any agent in the guest?

A: The on-premises technology is built on top of Windows Server 2012 Hyper-V Replica which is guest, workload and storage agnostic.

Q13: What about disks and disk geometries?

A: We support all combinations of VHD/x with fixed, dynamic, differencing.

Q14: Any restrictions on the size of the disks?

A: There are certain restrictions on the size of the disks of IaaS VMs in Azure. Azure is a rapidly evolving platform and these restrictions are applicable as of June 2014.

Q15: Any gotchas with network configuration or memory assigned to the VM?

A: Just like the previous question, when you fail over your VM, you will be bound by Azure’s offerings/features for IaaS VMs. As of today, Azure supports one network adapter and up to 112 GB of memory (in the A9 VM). The product does not put a hard block in place in case you have a different network and/or memory configuration on-premises. You can change the parameters with which a VM will be created in the Azure portal under the Recovery Services option.

Q16: Where can I find information about the product, pricing etc?

A: To know more about Azure Site Recovery, pricing, and documentation, visit http://azure.microsoft.com/en-us/services/site-recovery/

Q17: Is there any document explaining the workflows?

A: You can refer to the getting-started-guide @ http://azure.microsoft.com/en-us/documentation/articles/hyper-v-recovery-manager-azure/ or post a question in our forums (see below)

Q18: I faced some errors when using the product. Is there an MSDN forum where I can post my query?

A: Yes, please post your questions, queries @ http://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?forum=hypervrecovmgr

Q19: But I really feel strongly about some of the features and I would like to share my feedback with the PG. Can I comment on the blog?

A: We love to hear your feedback; feel free to leave comments on any of our blog articles. But a more structured approach would be to post your suggestions @ http://feedback.azure.com/forums/256299-site-recovery

Q20: Will you build everything which I suggest?

A: Of course…not :) But on a serious note – we absolutely love to hear from you. So don’t be shy with your feedback.

Azure Site Recovery – case of the “network connection failure”


Luís Caldeira is one of our early adopters who pinged us with an interesting error. Thanks for reaching out to us, Luís, and sharing the details of your setup. I am sure this article will come in handy to folks who hit this error at some point.

Some days back, Luís sent us a mail informing us that his enable-protection workflow was consistently failing with a “network connection failure” error message. He indicated that he had followed the steps listed in the tutorial (http://azure.microsoft.com/en-us/documentation/articles/hyper-v-recovery-manager-azure/). He had:

  • Setup SCVMM 2012 R2
  • Created the Site Recovery vault, uploaded the required certificate
  • Installed & configured the Microsoft Azure Site Recovery Provider in the VMM server
  • Registered the VMM server
  • And finally installed the Microsoft Azure Recovery Services agent in each of his Hyper-V servers.

He was able to view his on-prem cloud in the Azure portal and could configure protection policies on it as well. However, when he tried to enable protection on a VM, the workflow failed and he saw the following set of tasks in the portal:

[Screenshot of the failed tasks in the portal]

Clicking on ‘Error Details’ showed the following information:

[Screenshot of the error details]

Hmm, not too helpful? Luís thought as much as he reached out to us with the information through our internal DL. We did some basic debugging by looking at the Hyper-V VMMS event viewer logs and the Microsoft Azure Recovery Services event viewer log. Both of them pointed to a failure in the network with the following error message:

[Screenshot of the event viewer error]

A snip of the error message (after removing the various IDs): “Could not replicate changes for virtual machine VMName due to a network communication failure. (Virtual Machine ID VMid, Data Source ID sourceid, Task ID taskid)”

The message was less cryptic but still did not provide a solution. The network connection from the Hyper-V server seemed okay, as Luís was able to access different websites from the box. He was able to connect to other servers over Remote Desktop, the firewall looked OK, and inbound connections looked good as well. The Azure portal was able to enumerate the VMs running on the Hyper-V server – but the enable replication call was failing.

You are bound to see more granular error messages @ C:\Program Files\Microsoft Azure Recovery Services Agent\Temp\CBEngineCurr.errlog, and we proceeded to inspect that file. The trace indicated that the name resolution to the Azure service happened as expected, but “the remote server was timing out (or) connection did not happen”.

OK, so DNS was ruled out as well. We asked Luís to help us understand the network elements in his setup, and he indicated that he had a TMG proxy server. We logged into the proxy server and enabled real-time logs on the TMG proxy server. We retried the workflow and it promptly failed – but interestingly, the proxy server did not register any traffic blip. That was definitely odd. So browsing from the server worked, but the connection to the service failed. Hmm.

But the lack of activity in the TMG server at least indicated a local failure. We were not dealing with an Azure service side issue, and that ruled out 50% of the potential problems. At a high level, the agent (Microsoft Azure Recovery Services) which is installed on the Hyper-V server acts as a “data mover” to Azure. It is also responsible for all the authentication and connection management when sending replica data to Azure. This component was built on top of a previously released component of the Windows Azure Online Backup solution and enhanced to support this scenario.

The good news is that the agent is quite network savvy and has a bunch of configuration settings to tinker with. One such setting is the proxy server, which can be reached by opening the “Microsoft Azure Backup” MMC snap-in and clicking “Change properties” in the Actions menu.

image

We clicked on the “Proxy configuration” tab to set the proxy details in Luis’s setup.

image

After setting the proxy server, we retried the workflow… and it failed yet again. Luis then indicated that he was using an authenticated proxy server. Now things got interesting – because the Microsoft Azure Recovery Services agent runs in the System context (unlike, say, IE, which runs in the user context), we needed to set the proxy authentication parameters as well. In the same proxy configuration page as above, we now provided the user ID and password.

image

Now, when we retried the replication – voila! – the workflow went through and initial replication was on its way. The same settings can be configured using the Set-OBMachineSetting cmdlet (http://technet.microsoft.com/en-us/library/hh770409.aspx).
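For reference, here is a rough sketch of the equivalent PowerShell. The proxy host, port, and credentials below are placeholders, not values from Luis's setup:

```powershell
# Placeholders: substitute your own proxy host, port, and credentials
$proxyPassword = ConvertTo-SecureString "ProxyP@ssw0rd" -AsPlainText -Force

Set-OBMachineSetting -ProxyServer "http://tmgproxy.contoso.com" `
                     -ProxyPort 8080 `
                     -ProxyUsername "CONTOSO\svc-asr" `
                     -ProxyPassword $proxyPassword
```

Because the setting is machine-wide (the agent runs as System), this applies to all replication traffic from the host, not just the current user.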

Needless to say, once the issue was fixed, Luis took the product out on a full tour and he totally loved it (ok, I just made up that last part).

I encourage you to try out ASR and share your feedback. It’s extremely easy to set up and provides a great cloud-based DR solution.

You can find more details about the service @ http://azure.microsoft.com/en-us/services/site-recovery/. The documentation explaining the end to end workflows is available @ http://azure.microsoft.com/en-us/documentation/articles/hyper-v-recovery-manager-azure/. And if you have questions when using the product, post them @ http://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?forum=hypervrecovmgr or in this blog. You can also share your feedback on your favorite features/gaps @ http://feedback.azure.com/forums/256299-site-recovery

Out-of-band Initial Replication (OOB IR) and Deduplication


A recent conversation with a customer brought up the question: what is the best way to create an entire Replica site from scratch? On the surface this seems simple enough – configure initial replication to send the data over the network for the VMs, one after another in sequence. For this specific customer, however, there were some additional constraints:

  1. The network bandwidth was less than 10Mbps, and it primarily catered to their daily business needs (email, etc.). Adding more network capacity was not possible within their budget. This came as quite a surprise – despite the incredible download speeds commonly available these days, there are still places in the world where it isn't cost effective to purchase those speeds.
  2. The VMs were between 150GB and 300GB in size. This made it rather impractical to send the data over the wire – in the best case, it would have taken about 34 hours for a single 150GB VM.
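As a sanity check on that estimate, here is a back-of-envelope calculation. It assumes the full 10Mbps link is dedicated to the transfer and uses decimal gigabytes; real-world protocol overhead and shared usage push the number higher:

```powershell
# 150 GB at 10 Mbps, ignoring protocol overhead and retransmits
$vmSizeBits = 150e9 * 8      # 150 GB (decimal) expressed in bits
$linkBps    = 10e6           # 10 Mbps link speed in bits per second
$hours      = $vmSizeBits / $linkBps / 3600
"{0:N1} hours" -f $hours     # roughly 33 hours; overhead brings it to ~34
```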

This left OOB IR as the only realistic way to transfer the data. But at 300GB per VM, it is easy to exhaust a 1TB removable drive. That left us thinking about deduplication – after all, deduplication is supported on the Replica site. So why not use it to deduplicate the OOB IR data?

So I tested this out in my lab environment with a removable USB drive and a bunch of VMs created from the same Windows Server 2012 VHDX file. The expectation was that at least 20% to 40% of the data would be the same across the VMs, the overall deduplication rate would be quite high, and we could fit a good number of VMs onto the removable USB drive.

I started this experiment by attaching the removable drive to my server and attempted to enable deduplication on the associated volume in Server Manager.

Interesting discovery #1:  Deduplication is not allowed on volumes on removable disks

Whoops! This seems like a fundamental block for our scenario – how do you build a deduplicated OOB IR copy if deduplication is not supported on removable media? This limitation is officially documented at http://technet.microsoft.com/en-us/library/hh831700.aspx: “Volumes that are candidates for deduplication must conform to the following requirements: Must be exposed to the operating system as non-removable drives. Remotely-mapped drives are not supported.”

Fortunately my colleague Paul Despe in the Windows Server Data Deduplication team came to the rescue. There is a (slightly) convoluted way to get the data on the removable drive and deduplicated. Here goes:

  • Create a dynamically expanding VHDX file. The size doesn’t matter too much as you can always start off with the default and expand if required.

image

  • Using Disk Management, bring the disk online, initialize it, create a single volume, and format it with NTFS. You should be able to see the new volume in your Explorer window. I used Y:\ as the drive letter.

image

  • Mount this VHDX on the server you are using to do the OOB IR process.
  • If you go to Server Manager and view this volume (Y:\), you will see that it is backed by a fixed disk.

image

  • In the volume view, enable deduplication on this volume by right-clicking and selecting ‘Configure Data Deduplication’. Set the ‘Deduplicate files older than (in days)’ field to zero.

image

image

You can also enable deduplication in PowerShell with the following cmdlets:

PS C:\> Enable-DedupVolume Y: -UsageType HyperV
PS C:\> Set-DedupVolume Y: -MinimumFileAgeDays 0
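The earlier disk-creation steps can likewise be scripted end to end. Here is a rough sketch; the path, size, and drive letter are examples, not values from the walkthrough above:

```powershell
# Create and attach the dynamically expanding VHDX (path/size are examples)
New-VHD -Path D:\OOBIR\dedup-store.vhdx -SizeBytes 1TB -Dynamic
Mount-VHD -Path D:\OOBIR\dedup-store.vhdx

# Initialize the attached disk and format a single NTFS volume as Y:
$disk = Get-VHD -Path D:\OOBIR\dedup-store.vhdx
Initialize-Disk -Number $disk.DiskNumber -PartitionStyle GPT
New-Partition -DiskNumber $disk.DiskNumber -DriveLetter Y -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "OOB-IR"

# Then enable deduplication as shown above
Enable-DedupVolume Y: -UsageType HyperV
Set-DedupVolume Y: -MinimumFileAgeDays 0
```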

Now you are set to start the OOB IR process and take advantage of the deduplicated volume. This is what I saw after 1 VM was enabled for replication with OOB IR:

image

image

That’s about 32.6GB of storage used. Wait… shouldn’t there be a reduction in size because of deduplication?

Interesting discovery #2:  Deduplication doesn’t work on-the-fly

Ah… so if you were expecting that the VHD data would arrive in the volume already deduplicated, this is going to be a bit of a surprise. On the first pass, the VHD data will be present in the volume at its original size. Deduplication happens post-facto, as a job that crunches the data and reduces the size of the VHD after it has been fully copied as part of the OOB IR process. This is because deduplication needs an exclusive handle on the file in order to do its work.

The good part is that you can trigger the job on demand and start the deduplication as soon as the first VHD is copied, using the PowerShell cmdlet provided:

PS C:\> Start-DedupJob Y: -Type Optimization

There are other parameters provided by the cmdlet that allow you to control the deduplication job. You can explore the various options in the TechNet documentation: http://technet.microsoft.com/en-us/library/hh848442.aspx.
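While the job runs, the same module offers query cmdlets you can use to watch progress and check the resulting savings (Y: matches the drive letter used above):

```powershell
# Show progress of the running optimization job on the volume
Get-DedupJob -Volume Y:

# Report space savings and how many files have been optimized so far
Get-DedupStatus -Volume Y: |
    Format-List SavedSpace, OptimizedFilesCount, InPolicyFilesCount
```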

This is what I got after the deduplication job completed:

image

That’s a 54% saving with just one VM – a very good start!

Deduplication rate with more virtual machines

After this I threw in a few more virtual machines with completely different applications installed and here is the observed savings after each step:

image

I think the excellent results speak for themselves! Notice how between VM2 and VM3, almost all of the data (~9GB) was absorbed by deduplication, with an increase of only 300MB! As the deduplication team has published on TechNet, VDI VMs have a high degree of similarity in their disks and result in a much higher deduplication rate. A random mix of VMs yields surprisingly good results as well.

Final steps

Once you are done with the OOB IR and deduplication of your VMs, you need to do the following steps:

  1. Ensure that no deduplication job is running on the volume
  2. Eject the fixed disk – this should disconnect the VHD from the host
  3. Compact the VHD using the “Edit Virtual Hard Disk Wizard”. At the time I disconnected the VHD from the host, its size was 36.38GB. After compacting, the size came down to 28.13GB – more in line with the actual disk space consumed that you see in the graph above
  4. Copy the VHD to the Replica site, mount it on the Replica host, and complete the OOB IR process!
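Steps 1–3 above can be sketched in PowerShell as well. The volume letter and VHDX path below are placeholders matching the earlier example:

```powershell
# 1. Make sure no deduplication job is still running on the volume
Get-DedupJob -Volume Y:

# 2. Detach the VHDX from the host before compacting it
Dismount-VHD -Path D:\OOBIR\dedup-store.vhdx

# 3. Compact the VHDX (equivalent of the Edit Virtual Hard Disk Wizard)
Optimize-VHD -Path D:\OOBIR\dedup-store.vhdx -Mode Full
```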

 

Hope this blog post helps with setting up your own Hyper-V Replica sites from scratch using OOB IR! Try it out and let us know your feedback.

Azure Site Recovery adds InMage Scout to Its Portfolio for Any Virtual and Physical Workload Disaster Recovery


Azure Site Recovery with Hyper-V Replica already supports setting up disaster recovery between two of your Windows Server 2012+ Hyper-V data centers, or between your Windows Server 2012 R2 Hyper-V data center and Microsoft Azure. Recently we acquired InMage Systems Inc., an innovator in the emerging area of cloud-based business continuity. InMage offers migration and disaster recovery capabilities for heterogeneous IT environments with workloads running on any hypervisor (e.g. VMware) or even on physical servers. InMage’s flagship product, Scout, is now available as a limited-period free trial from the Azure Site Recovery management portal.

To learn more, download and try out Azure Site Recovery with InMage Scout, please read my colleague Gaurav Daga’s blog Azure Site Recovery Now Offers Disaster Recovery for Any Physical or Virtualized IT Environment with InMage Scout.

You can find more details about Azure Site Recovery @ http://azure.microsoft.com/en-us/services/site-recovery/. If you have questions when using the product, post them @ http://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?forum=hypervrecovmgr or in this blog. You can also share your feedback on your favorite features/gaps @ http://feedback.azure.com/forums/256299-site-recovery

ExpressRoute + ASR = Efficient DR solution


Microsoft recently announced the availability of Azure ExpressRoute, which enables our customers to create a private connection between their on-premises infrastructure and Microsoft Azure. This ensures that data sent to Azure takes a path alternate to the public internet: a connection to Azure can be established through an Exchange Provider or through a Network Service Provider. With ExpressRoute, customers can connect in a private peering setup to both Azure public cloud services and private virtual networks.

This opened up a new set of scenarios which were otherwise gated on network infrastructure (or the lack of it) – key among them continuous replication scenarios such as Azure Site Recovery. At scale, when replicating tens to hundreds of VMs to Azure using Azure Site Recovery (ASR), you can quickly send terabytes of data over ExpressRoute.

You can find tons of documentation on ExpressRoute and its capabilities @ https://azure.microsoft.com/en-us/services/expressroute/ and TechEd talks on Channel 9 @ http://channel9.msdn.com/Search?term=expressroute#ch9Search. ExpressRoute truly extends your datacenter to Azure, and organizations can view Azure as “yet another branch office”.

The ASR team was truly excited by this announcement, which couldn’t have come at a better time. Microsoft’s internal IT (MSIT) and Network Infrastructure Services (NIS), being among the very first adopters of ExpressRoute, have rolled out ExpressRoute as a network service, which enables a true hybrid cloud experience for all internal customers.

My partners in MSIT (Arvind Rao & Vik Chhabra) helped me get an ExpressRoute-connected setup, and I got a chance to play around with ASR from one of the Microsoft buildings at Puget Sound. The setup loaned to me by MSIT looks similar to this (except that MSIT owns both the infrastructure and the network):

image

At a high level, ASR replicates the VM (the initial copy and subsequent changes) directly to your storage account. The “replication” traffic is sent over the green line to “Azure public resources” such as Azure blob storage. Once the VMs are failed over, we create IaaS VMs in Azure using the replicated data. Any traffic back to the corporate network (CorpNet), or from CorpNet to the IaaS VMs, goes over the red line in the picture above.

The results were fabulous, to say the least! High throughput was observed during both initial and delta replication. Once the VMs were failed over, traffic flowed back to our internal CorpNet, and high throughput was observed there as well. The key takeaway: once ExpressRoute was set up, ASR just worked. No extra configuration was required from ASR’s perspective.

How high is “high throughput”? In a setup where I had 3 replicating VMs, the picture below captures the network throughput while initial replication was in progress:

A whopping 1.5Gbps network upload speed to Azure – go ExpressRoute, go!

ASR combined with ExpressRoute provides a powerful, compelling, efficient disaster recovery scenario to Microsoft Azure. ExpressRoute removes traditional blockers in networking when sending massive amounts of data to Azure – disaster recovery being one such scenario. And ASR removes traditional blockers of providing an easy, cost effective DR solution to a public cloud infrastructure such as Microsoft Azure.

You can find more details on ASR @ http://azure.microsoft.com/en-us/services/site-recovery/. The documentation explaining the end to end workflows is available @ http://azure.microsoft.com/en-us/documentation/articles/hyper-v-recovery-manager-azure/. And if you have questions when using the product, post them @ http://social.msdn.microsoft.com/Forums/windowsazure/en-US/home?forum=hypervrecovmgr or in this blog.

You can also share your feedback on your favorite features/gaps @ http://feedback.azure.com/forums/256299-site-recovery. As always, we love to hear from you!

Migrate Windows Server 2012R2 Virtualized Workloads to Microsoft Azure with Azure Site Recovery


Azure Site Recovery not only enables a low-cost, high-capability, CAPEX-saving, and OPEX-optimizing disaster recovery strategy for your IT infrastructure; it can also help you quickly and easily spin up additional development and testing environments or migrate on-premises virtual machines to Microsoft Azure. For customers who want a unified solution that reduces downtime of their production workloads during migration and also enables verification of their applications in Azure without any impact to production, Azure Site Recovery’s built-in features make migrating to Azure simple, reliable, and quick. The flowchart below describes a typical migration flow using Azure Site Recovery. For more information, please read the detailed blog on how to migrate on-premises virtualized workloads to Azure using Azure Site Recovery.

Flowchart describing workflow for migrating on-premise virtualized workloads to Azure using Azure Site Recovery.


Share your feedback!


Are you a System Administrator or Security Analyst? Would you like to influence the future of securing your virtualized infrastructure? If this sounds interesting to you, Microsoft Windows Server and cloud management Program Managers would like to hear from you.

Please complete this short survey that will help identify your specific areas of interest and expertise to make sure our discussions fit your interests.

Azure Site Recovery: Data Security and Privacy


Microsoft is committed to ensuring the Privacy and Security of our customer’s data whenever it crosses on-premises boundaries and into Microsoft Azure. Azure Site Recovery, a cloud-based Disaster Recovery Service that enables protection and orchestrated recovery of your virtualized workloads across on-premises private clouds or directly into Azure, has been designed ground up to align with Microsoft’s privacy and security commitment.

Specifically our promise is to ensure that:

  • We encrypt customer data while in transit and at rest
  • We use best-in-class industry cryptography to protect all channels, including Perfect Forward Secrecy and 2048-bit key lengths

To read more about how the Azure Site Recovery architecture delivers on these key goals, check out our new blog post, Azure Site Recovery: Our Commitment to Keeping Your Data Secure on the Microsoft Azure blog.

Networking 101 for Disaster Recovery to Microsoft Azure using Site Recovery


Getting the networking requirements right is a critical piece of ensuring disaster recovery readiness for your business-critical workloads. When an administrator evaluates the disaster recovery capabilities that she needs for her application(s), she needs to think through and build a robust networking infrastructure that ensures the application is accessible to end users once it has been failed over, and that application downtime is minimized – RTO optimization is of key importance.


Head over to the Microsoft Azure blog to read our new blog that shows you how you can accomplish Networking Infrastructure Setup for Microsoft Azure as a Disaster Recovery Site. Using the example of a multi-tier application we show you how to setup the required networking infrastructure, establish network connectivity between on-premises and Azure and then conclude the post with more details on Test Failover and Planned Failover.

Announcing the GA of Disaster Recovery to Azure using Azure Site Recovery


I am excited to announce the GA of Disaster Recovery to Azure using Azure Site Recovery. In addition to enabling replication to and recovery in Microsoft Azure, ASR enables automated protection of VMs, remote health monitoring, no-impact recovery plan testing, and single-click orchestrated recovery – all backed by an enterprise-grade SLA.

The DR to Azure functionality in ASR builds on top of System Center Virtual Machine Manager, Windows Server Hyper-V Replica, and Microsoft Azure to ensure that our customers can leverage existing IT investments while still helping them optimize precious CAPEX and OPEX spent in building and managing secondary datacenter sites.

The GA release also brings significant additions to the already expansive list of ASR’s DR to Azure features:

  • NEW ASR Recovery Plans and Azure Automation integrate to offer robust and simplified one-click orchestration of your DR plans
  • NEW Track Initial Replication Progress as virtual machine data gets replicated to a customer-owned and managed geo-redundant Azure Storage account. This new feature is also available when configuring DR between on-premises private clouds across enterprise sites
  • NEW Simplified Setup and Registration streamlines the DR setup by removing the complexity of generating certificates and integrity keys needed to register your on-premises System Center Virtual Machine Manager server with your Site Recovery vault

Hyper-V integration components are available through Windows Update


Starting in Windows Technical Preview, Hyper-V integration components will be delivered directly to virtual machines using Windows Update.

Integration components (also called integration services) are the set of synthetic drivers that allow a virtual machine to communicate with the host operating system. They control services ranging from time synchronization to guest file copy. We've been talking to customers about integration component installation and update over the past year, and discovered that they are a huge pain point during the upgrade process.

Historically, all new versions of Hyper-V came with new integration components, and upgrading the Hyper-V host required upgrading the integration components in the virtual machines as well. The new integration components were included with the Hyper-V host and were then installed in the virtual machines using vmguest.iso. This process required restarting the virtual machine and couldn't be batched with other Windows updates. Since the Hyper-V administrator had to offer vmguest.iso and the virtual machine administrator had to install the components, an integration component upgrade required the Hyper-V administrator to have administrator credentials in the virtual machines -- which isn't always the case.

In Windows Technical Preview, all of that hassle goes away. From now on, all integration components will be delivered to virtual machines through Windows Update, along with other important updates.

For the first time, Hyper-V integration components (integration services) are available through Windows Update for virtual machines running on Windows Technical Preview hosts.

There are updates available today as KB3004908 for virtual machines running:

  • Windows Server 2012
  • Windows Server 2008 R2
  • Windows 8
  • Windows 7

The virtual machine must be connected to Windows Update or a WSUS server. In the future, integration component updates will have a category ID; for this release, the update is listed as Important update KB3004908.
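To check which integration services version a virtual machine is currently running, you can use the built-in Hyper-V cmdlets on the host. A quick sketch ("MyVM" is a placeholder name):

```powershell
# Runs on the Hyper-V host; "MyVM" is a placeholder virtual machine name
Get-VM -Name MyVM |
    Select-Object Name, IntegrationServicesVersion, IntegrationServicesState

# Per-service view: shows which integration services are enabled/operational
Get-VMIntegrationService -VMName MyVM
```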

Again, these updates will only be available to virtual machines running on Windows Technical Preview hosts.

Enjoy!
Sarah
