By Todd Erickson
If you are considering or starting to implement virtual servers in your IT environment, you should be aware of the differences between virtualization disaster recovery and physical-server disaster recovery.
"In many ways, implementing a disaster recovery plan for a virtual environment is the same as a typical physical-server environment. The general approach is not that much different from what you would consider for a physical environment," said Richard Jones, vice president and service director for data center strategies for the Burton Group. You want to back up your data, send it to an offsite location and be able to retrieve it as quickly as possible.
The differences between virtualization disaster recovery and physical-server disaster recovery revolve around backing up and recovering virtual machines (VMs) as opposed to physical servers. While many well-known software vendors offer backup solutions, some technologies can degrade virtualization efficiencies.
In this tutorial on virtualization disaster recovery, learn about virtual server backup and recovery today, VMware vs. Hyper-V, virtual server disaster recovery, and more.
There are a number of accepted methods for backing up virtual environments. "What we are seeing today is really a mix of technologies with most customers," said Jeff Boles, a senior analyst with the Taneja Group. "They are still grasping at straws trying to figure out how to solve the VMware backup problem."
There are generally three virtual backup methods today: agent-based backup, image-based backup and serverless backup.
The most common virtual server backup method today, and the most mature technology, is agent-based backup, where you install a backup agent
into the VM guest OS. "The vast majority of customers still use agent-based [backup] today," Boles said. The agent is aware that it's in a virtual environment and knows to ignore swap files and temporary files when backing up data. Agent-based solutions can restore individual files, full images, or entire VMs. Well-known backup applications that use agent-based backup include CA Inc.'s ARCserve, CommVault Systems Inc.'s Simpana, and Symantec Corp.'s NetBackup and Backup Exec.
There are disadvantages to agent-based backup, most notably when you run multiple VMs on physical servers with limited I/O and bandwidth resources. Problems occur if you schedule all your VM backups for one window and the server's I/O and bandwidth simply can't handle the load. "Most people run across this problem the first time they virtualize," Jones said. "They learn their lesson that first time, they make the configuration changes, and they don't repeat it." Jones said a number of administrators stagger VM backups and spread the I/O and bandwidth load over time. The implementation of 10 Gigabit Ethernet could also alleviate the bandwidth issues.
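The staggering Jones describes boils down to a simple scheduling calculation. The sketch below is a minimal illustration; the VM names, the 10 p.m. start, and the 45-minute gap are hypothetical values, not recommendations from the article:

```python
from datetime import datetime, timedelta

def stagger_backup_windows(vm_names, first_start, gap_minutes):
    """Give each VM its own backup start time so snapshot and transfer
    I/O is spread across the window instead of hitting the host's disk
    and network links all at once."""
    return {vm: first_start + timedelta(minutes=i * gap_minutes)
            for i, vm in enumerate(vm_names)}

# Four guests on one host, kicked off 45 minutes apart starting at 10 p.m.
plan = stagger_backup_windows(["web01", "app01", "db01", "db02"],
                              datetime(2010, 8, 1, 22, 0), 45)
for vm, start in plan.items():
    print(vm, start.strftime("%H:%M"))
```

In practice you would feed these start times to your backup scheduler; the point is simply that no two guests on the same host contend for the same window.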
While agent-based backup operates within the guest OS, image-based backup operates at the virtualization layer and backs up entire virtual disks. "Image-based backup is generally a single point-in-time entire image of the operating system volume and all the data associated with it," said Jeff Boles. "Something like a snapshot or a block-based backup." Since image-level backup applications use snapshots, all of the VM data, including deleted files and empty disk blocks, is backed up. As a result, image-level backup application vendors are incorporating data reduction technologies, such as deduplication and synthetic backups, to reduce the amount of data stored. New technologies and products are alleviating other image-based-backup issues, including file-level restores and incremental backups.
Vizioncore Inc., a subsidiary of Quest Software Inc., uses Active Block Mapping (ABM) in its vRanger Pro backup product to eliminate inactive blocks and Changed Block Tracking (CBT) to retrieve only changed blocks. PHD Virtual Technologies includes CBT in its PHD Virtual Backup for VMware ESX (formerly named esXpress), as well as source-side global deduplication. Veeam Software's Backup & Replication utilizes inline block-level deduplication and compression to reduce backup data.
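The two reduction ideas these vendors apply, tracking which blocks changed since the last backup and storing each unique block only once, can be sketched in a few lines. This is a simplified illustration of the concepts, not any vendor's actual implementation:

```python
import hashlib

def changed_blocks(previous, current):
    """Changed Block Tracking in miniature: compare this backup's blocks
    against the last one and return only the indices that differ."""
    return [i for i, (old, new) in enumerate(zip(previous, current))
            if old != new]

def dedup_store(blocks, store):
    """Inline block-level deduplication: keep one copy of each unique
    block, keyed by content hash, plus a hash 'recipe' for restores."""
    recipe = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # stored once, however often it repeats
        recipe.append(digest)
    return recipe

last = [b"boot", b"data", b"temp"]
now  = [b"boot", b"data2", b"temp"]
print(changed_blocks(last, now))        # only one block needs to move

store = {}
recipe = dedup_store([b"A", b"B", b"A", b"A"], store)
print(len(store), len(recipe))          # 2 unique blocks back 4 logical ones
```

A restore simply walks the recipe and reads each hash back out of the store, which is why deduplicated backups stay fully recoverable.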
Serverless backup (sometimes called LAN-free backup or proxy-based backup) requires a storage area network (SAN) with Fibre Channel, iSCSI, or Fibre Channel over Ethernet (FCoE) connectivity. The VM momentarily pauses active applications and creates a snapshot, which it moves to a dedicated server on the SAN. That server spins up a VM to stream the snapshot to the end backup target. "It's called serverless backup because the server that actually owns the data never has to be in the path of the backup or restore," Jones said. Most major backup application vendors offer serverless backup. Support for non-Windows OSes is limited but increasing. The downside to serverless backup is possible SAN bottlenecks: successfully moving the snapshot from the VM to the dedicated SAN server, and then streaming it to the backup target, depends on adequate SAN I/O and bandwidth resources.
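The serverless sequence, quiesce, snapshot, resume, then let the SAN side do the streaming, can be sketched as follows. The classes here are stand-ins for illustration only; in a real environment the hypervisor and backup software coordinate these steps:

```python
class GuestVM:
    """Hypothetical stand-in for a production VM."""
    def __init__(self, disk):
        self.disk = disk
        self.running = True

    def quiesce(self):
        self.running = False         # applications briefly paused

    def snapshot(self):
        return bytes(self.disk)      # point-in-time copy of the disk

    def resume(self):
        self.running = True          # back in service almost immediately

def serverless_backup(vm, backup_target):
    vm.quiesce()
    snap = vm.snapshot()
    vm.resume()
    # From here on, a dedicated SAN-attached proxy streams the snapshot
    # to the target; the server that owns the data never carries the
    # backup traffic itself.
    backup_target.extend(snap)
    return vm.running

vm = GuestVM(b"application data")
target = bytearray()
print(serverless_backup(vm, target), bytes(target))
```

The key property the sketch captures is that the production VM is paused only long enough to take the snapshot; the heavy data movement happens entirely off the production server.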
Integration with VMware
Until vSphere 4 was released in May 2009, VMware offered a set of utilities collectively known as VMware Consolidated Backup (VCB), which accomplished serverless backup. With vSphere 4, VMware dumped VCB and now offers a set of application programming interfaces (APIs) called vStorage APIs for Data Protection, which offload backup processing from the VMware server.
Most major backup application vendors support VMware's vStorage APIs, as do the independent vendors that focus on virtual environments. The APIs give backup applications insight into the VM: where the guest OS is located, where its storage is located, and how to initiate snapshots. This insight is especially important if you are working with more than one physical host and using vMotion within a VM cluster, Jones said. One issue you may run into with smaller companies' products is that they may only support VMware, and not Microsoft Corp.'s Hyper-V or Citrix Systems Inc.'s XenServer virtualization technologies.
VMware dominates the virtual-environment market, but Microsoft's Hyper-V is gaining traction. If you have a Hyper-V environment, you will see some technology differences in how you protect your data, but the strategy remains the same. "As far as the general high-level strategy for backing up [in either environment] there's really not much of a difference at all," Jones said. You will need to understand how Microsoft's Volume Shadow Copy Service (VSS) allows applications to talk to Hyper-V's integration services. Backup application vendors that support both VMware and Hyper-V make these differences easy to deal with.
A main difference between VMware, Hyper-V and XenServer is how storage resources are brought into the VM, Jones said. VMware abstracts the storage while Hyper-V and XenServer do not. Hyper-V and XenServer establish more of a direct connection between the VMs and data storage, while VMware requires storage resources to pass through the Virtual Machine Disk (VMDK) environment.
Initially, VMware required users to restore an entire VMDK image to recover application data. This process was slow and a pain point for users, so the company developed its Raw Device Mapping technology. "[Raw Device Mapping] allows you to short-circuit some of the VMware architecture to bring a disk directly in and allow the guest OS to talk to it in a raw format," Jones said. In fact, he recommends Raw Device Mapping to his clients because you don't have to put a VMDK-formatted file on the disk to gain access to the guest OS. Serverless backup applications might require Raw Device Mapping technology, and you will absolutely want to use it for remote-site replication so you can split the application data, VM system files, and swap and temporary files into separate streams. Replicating swap files and temporary files could chew up a lot of bandwidth for no good reason.
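Splitting out the swap and temporary files before replication amounts to a simple filter over the file set. The sketch below illustrates the idea; the exclusion patterns are hypothetical examples, not an exhaustive or vendor-supplied list:

```python
import fnmatch

# Illustrative exclusion list: swap and temp files churn constantly
# but carry nothing worth sending over the WAN to the remote site.
EXCLUDE = ["*.vswp", "*.swp", "*.tmp", "pagefile.sys"]

def replication_set(paths):
    """Return only the files worth replicating offsite."""
    return [p for p in paths
            if not any(fnmatch.fnmatch(p.lower(), pat) for pat in EXCLUDE)]

files = ["exchange.edb", "vm01.vswp", "scratch.tmp", "pagefile.sys"]
print(replication_set(files))   # only the application data survives the filter
```

With Raw Device Mapping exposing the disks separately, application data, VM system files, and swap/temp files can each get their own stream, and only the first two need to consume WAN bandwidth.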
Remote replication for DR protection is another issue administrators of virtual environments will have to contend with, as not all backup application vendors offer replication solutions for virtual servers, and according to Jones, none of them do it particularly well. "I haven't seen any that have been really well integrated at this point," Jones said.
Virtual environment replication has spawned a spin-off industry. "There's a different class of vendors out there that are focusing on replication," Jones said. These vendors are approaching replication from the application level, as opposed to attaching to the VM. Using application APIs, vendors such as Double-Take Software Inc. (recently acquired by Vision Solutions Inc.) and Neverfail Ltd. hook into the applications running on VMs so they can not only protect the data, but understand the metadata and context of the application itself. The replication solutions can understand the difference between a Microsoft Exchange individual email, mailbox and post office. They can also do granular restores, such as an individual email message, and replicate only changes instead of entire data sets.
Virtualization offers some benefits over physical server environments when it comes to recovery, but there are still a number of factors to consider when planning and setting up your remote site. If you have a VMware environment, get to know vCenter Site Recovery Manager (SRM). "If you are a VM customer today, SRM is pretty much the standard," Boles said. "It's just there, and you'd be hard-pressed not to use something that's integrated into your vCenter virtual infrastructure." SRM sits on top of your storage-array-based replication solution and manages the relationships between the VMs at the primary site and the remote site.
"SRM has some plug-in capabilities so that it understands array-based replication and can orchestrate the entire virtual infrastructure on top of a replicating storage pool," Boles said. "That's a great coup for the typical DR manager because it hasn't always been easy to understand the dependencies between things, and SRM goes quite a ways in structuring relationships between different VMs so you can make sure they are always managed and recovered together at the remote site."
Site Recovery Manager requires synchronous replication to the remote site. There are third-party tools that will allow you to do asynchronous replication with a smaller pipe, including InMage Systems Inc.'s application and data recovery platform and 3PAR Inc.'s Remote Copy software (3PAR recently signed an agreement to be acquired by Dell Corp.).
Boles said there are a number of factors to consider when planning your virtualization disaster recovery strategy. You need to have the right replication features in your storage array, and a big pipe to the remote site for synchronous replication. Make sure you understand your hardware and software needs at the remote site, including having Site Recovery Manager and the appropriate virtualization licensing in place. "It's challenging for customers to figure out exactly what they need at the secondary site," Boles said, "and to right-size it so that it's not more than what they need and they are not paying for needless infrastructure over there."
You need to understand your environment dependencies so you have all of the necessary applications under SRM for DR protection. If some applications require multiple servers, such as Web servers, application servers and database servers, make sure they are all protected.
For Hyper-V environments, Marathon Technologies Corp.'s everRun SplitSite maintains application high availability over switched WAN sites. With Microsoft Cluster Service (MSCS), Hyper-V's Enterprise and Data Center editions can also be deployed as a high-availability cluster environment with geographically separated nodes. Citrix XenServer environments can use Marathon Technologies' everRun VM for Citrix XenServer for high-availability redundant systems and proactive monitoring.
Finally, disaster recovery has to be a "living process" in your organization. Don't just set up your remote site and forget about it. When your virtual environment changes, make sure the disaster recovery site is in alignment. And test it.
This was first published in August 2010