
Hyper-V - Failover Cluster, Quick Migration and Live Migration

If you are like me and have mainly worked in small businesses, then you may not have had the chance to set up a failover clustering environment. We've all read about how wonderful it is to have one, but unfortunately, a lot of the time the money is spent on more important things like staff salaries and keeping the business afloat.

Well, I am happy to say that the client I am working for right now has a very good failover clustering Hyper-V environment running on top of Windows Server 2012 R2 Datacenter, attached to an EMC VNX 5300 SAN. The guy who set it up unfortunately no longer works there.

To cut a long story short, I needed to move a VM to another node (another host in the cluster).

With failover clustering being used for Hyper-V you need to run the Failover Cluster Manager to manage the VMs. You don't use the Hyper-V Manager, even though you can start it up and see the virtual machines. You can change some settings with Hyper-V Manager, but the recommendation from Microsoft is to use the Failover Cluster Manager if you are running a clustered environment.

Once you start up the Failover Cluster Manager, click on Roles on the left pane, and then you will see all the virtual machines being listed in the middle pane.

Hyper-V (version 2008 and above) allows you to do a Quick Migration. A Quick Migration saves the memory and virtual machine state to disk. The destination node then reads that state and starts up the VM. When you are using Cluster Shared Volumes in the nodes, there is no need to copy the virtual machine configuration and virtual hard disks. This method is suitable for planned maintenance, but there is downtime involved while the machine state is saved and then read on the destination node.
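Assuming the Failover Clustering PowerShell cmdlets are available on a cluster node, a Quick Migration can be sketched like this (the VM and node names here are placeholders, not from my environment):

```powershell
# Import the failover clustering cmdlets (installed with the
# Failover Clustering feature or the RSAT tools)
Import-Module FailoverClusters

# Move the clustered VM role "VM01" to node "HV-NODE2" as a Quick
# Migration: the VM state is saved to disk, ownership moves to the
# destination node, and the VM is resumed there (brief downtime).
Move-ClusterVirtualMachineRole -Name "VM01" -Node "HV-NODE2" -MigrationType Quick
```

The same thing can be done in the Failover Cluster Manager GUI by right-clicking the VM role and choosing Move.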

In version 2008 R2, Live Migration was introduced. Live Migration does not save the machine state; instead, it copies the VM's memory to the destination node while the VM keeps running. It keeps track of any changes to the source VM's memory during the copy, transfers those changes, and then starts up the VM on the destination server. Again, when you are using Cluster Shared Volumes in the nodes, there is no need to copy the virtual machine configuration and virtual hard disks. This method is also suitable for planned maintenance, and users won't even notice that a migration took place as it's very seamless.
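The same cmdlet performs a Live Migration when you change the migration type (again, the VM and node names are placeholders):

```powershell
Import-Module FailoverClusters

# Live-migrate the clustered VM role "VM01" to node "HV-NODE2":
# memory is copied while the VM runs, changed pages are re-sent,
# and the VM switches over with no perceptible downtime.
Move-ClusterVirtualMachineRole -Name "VM01" -Node "HV-NODE2" -MigrationType Live

# Check which node now owns the VM role
Get-ClusterGroup -Name "VM01" | Select-Object Name, OwnerNode, State
```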

Before a Live Migration proceeds, the system checks that it can go ahead, i.e. that there is network connectivity, compatible host hardware, enough RAM, etc. on the destination node. If there are going to be issues, the Live Migration will not proceed.

For Hyper-V versions 2008 and 2008 R2, the servers are required to be set up in a failover cluster for Quick Migration and Live Migration to work.

In Hyper-V 2012 and 2012 R2, the servers are no longer required to be in a failover cluster. Windows Server 2012 supports SMB 3.0 file shares for the storage of virtual machines. Users no longer have to invest in expensive SANs; instead, they can invest in some form of central storage that supports SMB file shares.
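As a minimal sketch, here is how a VM could be created with its configuration and virtual hard disk on an SMB file share instead of a SAN (the share path, VM name, and sizes are placeholders; the share needs to be SMB 3.0 with the right permissions for the Hyper-V hosts):

```powershell
Import-Module Hyper-V

# Create a new VM whose configuration files and virtual hard disk
# are stored on an SMB 3.0 file share rather than local disk/SAN.
New-VM -Name "VM02" `
       -MemoryStartupBytes 2GB `
       -Path "\\FILESERVER\VMStore" `
       -NewVHDPath "\\FILESERVER\VMStore\VM02\VM02.vhdx" `
       -NewVHDSizeBytes 60GB
```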

That's the basics of Quick Migration and Live Migration.

Do read up more on this topic on the Microsoft TechNet site.

I used Live Migration to move my VM. Also, in Windows 2012 there is an option to move the VM configuration and virtual hard disks to another volume. This is done HOT, without downtime: the configuration and virtual hard disks are copied to the destination volume first, and then the VM seamlessly switches over to the new volume automatically. I had to do this because the original volume was starting to run low on space.
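Assuming the Hyper-V PowerShell module is available on the host, a storage move like the one described above can be sketched with Move-VMStorage (the VM name and destination path are placeholders):

```powershell
Import-Module Hyper-V

# Move the VM's configuration files, checkpoints, smart paging
# file, and virtual hard disks to another volume while the VM
# keeps running (the running state stays on the same host).
Move-VMStorage -VMName "VM01" -DestinationStoragePath "D:\VMs\VM01"
```

For clustered VMs, the equivalent is the "Move Virtual Machine Storage" action in the Failover Cluster Manager.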

-Sirisoft Blogger

