After multiple attempts, I have concluded that directly moving a virtual machine from a stand-alone Hyper-V 2012 host to a Windows Server 2012 R2 cluster is not currently feasible.
I was able to achieve this with a two-step process: a) move the target VM to a ‘staging’ stand-alone Hyper-V 2012 R2 host, after configuring both the source and destination hosts for Live Migration with Kerberos authentication; b) move the VM from the staging host to the Windows Server 2012 R2 Failover Cluster; and c) add the new VM role to the cluster.
At this point, I can only attribute the direct-move failure to a compatibility issue between WS2012 and WS2012 R2.
Configure Live Migration on both Source and Destination Hyper-V Hosts:
The source host is running Hyper-V 2012, while the destination host is running Hyper-V 2012 R2. The Live Migration settings on both source and destination machines are the same.
The same thing can be achieved using PowerShell:
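As a sketch of what the screenshot configures, the following commands (run elevated on both the source and destination hosts) enable live migration with Kerberos authentication:

```powershell
# Run on BOTH the source and destination hosts.
# Enable incoming and outgoing live migrations on this host.
Enable-VMMigration

# Use Kerberos so the move can be initiated remotely
# (constrained delegation is configured in a later step).
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# Accept live migrations on any available network (fine for a lab).
Set-VMHost -UseAnyNetworkForMigration $true

# Confirm the settings match on both hosts.
Get-VMHost | Select-Object VirtualMachineMigrationEnabled,
    VirtualMachineMigrationAuthenticationType, UseAnyNetworkForMigration
```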
Turn Off the NetFirewallProfiles on the Source and Target Hosts:
Since both hosts are internal to my lab domain, I will disable the NetFirewallProfiles using PowerShell, as shown in the next screenshot, rather than creating a dedicated rule for live migration:
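The command in the screenshot is equivalent to the following. Note this is acceptable in an isolated lab only; in production, leave the firewall on and create rules for the live-migration traffic instead:

```powershell
# Lab environment only -- disables Windows Firewall on all profiles.
Set-NetFirewallProfile -Profile Domain,Private,Public -Enabled False
```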
Configure Kerberos Constrained Delegation:
Open Active Directory Users and Computers, browse to the source host's computer account, right-click it, and select Properties. Select the Delegation tab, choose the ‘Trust this computer for delegation to specified services only’ option, and select ‘Use Kerberos only’. Add the remote host for delegation. Do the same thing on the target host, as indicated in the screenshot:
I will write about Kerberos Constrained Delegation in a future post.
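For reference, the same delegation can be set from PowerShell by editing the computer account's msDS-AllowedToDelegateTo attribute. This is a sketch with hypothetical names (hyperv01 = source host, hyperv02 = staging host, contoso.com = lab domain); it assumes the ActiveDirectory module is available:

```powershell
Import-Module ActiveDirectory

# Allow the source host to delegate the CIFS and VM-migration
# services to the destination host (constrained, Kerberos only).
Set-ADComputer -Identity hyperv01 -Add @{
    'msDS-AllowedToDelegateTo' = @(
        'cifs/hyperv02.contoso.com',
        'cifs/hyperv02',
        'Microsoft Virtual System Migration Service/hyperv02.contoso.com',
        'Microsoft Virtual System Migration Service/hyperv02'
    )
}
# Repeat in the opposite direction on the destination host's account.
```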
Move the VM:
PS C:\> Move-VM -ComputerName labtarget -Name VM01 -DestinationHost hyperv02 -IncludeStorage -DestinationStoragePath c:\vms -Verbose
PS C:\> exit
The Virtual Machine move to the ‘staging’ target host completed successfully.
Move the Virtual Machine from the ‘staging’ Hyper-V 2012 R2 stand-alone Host to a Windows Server 2012 R2 Cluster Node:
At this time, I’ll move the VM to one of the Failover Cluster nodes. We’ll make the same configuration changes on the target node as above. It is important to:
1) Verify the Live Migration settings on the source and target hosts, to confirm they’re the same.
2) Verify that the NetFirewallProfile settings are the same on both hosts.
3) Configure and verify Kerberos Constrained Delegation for both hosts.
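The checks above can be scripted from a management machine. A minimal sketch, assuming the hypothetical host names hyperv02 (staging) and chost02 (cluster node):

```powershell
# Compare the live-migration settings side by side.
Get-VMHost -ComputerName hyperv02, chost02 |
    Select-Object Name, VirtualMachineMigrationEnabled,
        VirtualMachineMigrationAuthenticationType, UseAnyNetworkForMigration

# Check the firewall profiles on both hosts.
Invoke-Command -ComputerName hyperv02, chost02 {
    Get-NetFirewallProfile | Select-Object Name, Enabled
}
```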
The first time I attempted to initiate the VM move, the task failed as shown below:
PS C:\> Move-VM -ComputerName hyperv02 -Name VM01 -DestinationHost chost02 -IncludeStorage -DestinationStoragePath c:\vms -Verbose
VERBOSE: Move-VM will move the virtual machine "VM01" to host "chost02"
Move-VM : Virtual machine migration operation for 'VM01' failed at migration source 'HYPERV02'. (Virtual machine ID AB0A3404-E5A0-4950-8D17-3360A0AEB140)
The Virtual Machine Management Service failed to establish a connection for a Virtual Machine migration with host 'chost02': No credentials are available in the security package (0x8009030E).
Failed to authenticate the connection at the source host: no suitable credentials available.
At line:1 char:1
+ Move-VM -ComputerName hyperv02 -Name VM01 -DestinationHost chost02 -IncludeStorage ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (Microsoft.Hyper...VMMigrationTask:VMMigrationTask) [Move-VM], VirtualizationOperationFailedException
+ FullyQualifiedErrorId : OperationFailed,Microsoft.HyperV.PowerShell.Commands.MoveVMCommand
It occurred to me that my cluster has several networks set up for different roles/functions, and Cluster Live Migration is set to run off its own separate network. I temporarily changed the Cluster Live Migration to run off the domain LAN, verified the Live Migration settings on the source and destination hosts again to make sure they were on the same network, re-checked the Kerberos authentication settings, and then restarted both source and destination hosts.
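I made the network change in Failover Cluster Manager, but one way to do the same from PowerShell is via the cluster's MigrationExcludeNetworks parameter. This is a sketch with hypothetical names (LabCluster00 = cluster, ‘Migration’ = the dedicated migration network):

```powershell
# Find the ID of the dedicated migration network to exclude it,
# so live migration temporarily runs over the domain LAN.
$exclude = (Get-ClusterNetwork -Cluster LabCluster00 |
    Where-Object Name -eq 'Migration').Id

Get-ClusterResourceType -Cluster LabCluster00 -Name 'Virtual Machine' |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value $exclude
```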
PS C:\> Move-VM -ComputerName hyperv02 -Name VM01 -DestinationHost chost02 -IncludeStorage -DestinationStoragePath c:\vms -Verbose
VERBOSE: Move-VM will move the virtual machine "VM01" to host "chost02"
The VM move to the cluster node was successful this time.
Add the Virtual Machine Role to the Cluster:
Now that we’ve successfully moved the VM to a cluster node, we’ll add it as a role to the Failover Cluster using the following cmdlet.
PS C:\> Add-ClusterVirtualMachineRole -VMName vm01 -Cluster LabCluster00 -Verbose
Report file location: C:\Windows\cluster\Reports\Highly Available Virtual Machine fad92477-8a9e-460a-bcac-780e4bb61fb1 on 2014.02.15 At 20.11.11.mht
Name OwnerNode State
---- --------- -----
VM01 CHOST02 Online
The report file reminds us that the new virtual machine role’s storage is not yet highly available. So, finally, we move the VM storage onto a Cluster Shared Volume:
PS C:\> Move-VMStorage -ComputerName chost02 -VMName vm01 -DestinationStoragePath c:\ClusterStorage\Volume1
The move completes successfully. We can verify that the VM is a member of the cluster using the following PowerShell cmdlet:
PS C:\> Get-ClusterGroup -Cluster labcluster
Name OwnerNode State
---- --------- -----
Available Storage CHOST01 Offline
Cluster Group CHOST01 Online
VM01 CHOST02 Online
I hope this helps point someone in the right direction if they’re having problems migrating VMs in a Cluster scenario.