Speed and high availability are among the major benefits of virtual disaster recovery. Unfortunately, bandwidth constraints can throttle data transfer speeds and become a virtual DR plan's Achilles' heel.
In virtual disaster recovery, one or more virtual machines (VMs) fail over to the cloud or to a remote data center. The VMs can remain online and functional during a disaster, even in a large-scale incident. Slow data transfer speeds don't necessarily doom a VM disaster recovery plan, but there are several ways in which they can undermine an organization's disaster recovery initiatives.
One of the most obvious ways in which low transfer speeds can impact an organization's disaster recovery initiatives is in the replication process. In order for VM disaster recovery to work properly, a VM's contents must be replicated to the remote site. Once the initial replication process completes, the VM must be kept in sync with its replica. If a storage block on the VM is modified, then that modified storage block must be copied to the off-site replica VM.
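The change-tracking described above can be sketched in a few lines. This is a minimal, illustrative model only; real replication engines operate at the hypervisor or storage layer, and the block size and in-memory "disks" here are assumptions for the example.

```python
BLOCK_SIZE = 4096  # bytes per storage block (assumed size)

class ReplicatedDisk:
    def __init__(self, num_blocks):
        self.blocks = [bytes(BLOCK_SIZE)] * num_blocks
        self.dirty = set()  # indices of blocks modified since the last sync

    def write(self, index, data):
        """Write a block on the primary VM and mark it as changed."""
        self.blocks[index] = data
        self.dirty.add(index)

    def sync_to(self, replica):
        """Copy only the modified blocks to the off-site replica."""
        for index in sorted(self.dirty):
            replica.blocks[index] = self.blocks[index]
        sent = len(self.dirty)
        self.dirty.clear()
        return sent  # number of blocks shipped this cycle

primary = ReplicatedDisk(num_blocks=8)
replica = ReplicatedDisk(num_blocks=8)
primary.write(2, b"x" * BLOCK_SIZE)
primary.write(5, b"y" * BLOCK_SIZE)
print(primary.sync_to(replica))  # prints 2: only the changed blocks cross the WAN
```

The point of the sketch is the `dirty` set: after the initial full copy, only modified blocks traverse the WAN link, which is why the ongoing change rate, not the VM's total size, determines whether replication keeps up.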
This ongoing synchronization process usually works well. However, if the VM's contents change frequently, it can be a problem. If a VM consistently has a change rate that exceeds the transfer rate, then the replication engine will be unable to keep pace with the changes, and replication will eventually fail. This tends to happen if an organization tries to maintain a synchronized replica of a high-performance database server.
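A quick back-of-the-envelope check makes the failure mode concrete. The numbers and the 0.7 utilization factor below are assumptions for illustration, not vendor figures; the idea is simply that replication falls behind whenever the VM's sustained change rate exceeds the usable WAN bandwidth.

```python
def replication_keeps_pace(change_rate_mbps, link_mbps, link_utilization=0.7):
    """True if the usable WAN bandwidth exceeds the VM's data change rate.

    link_utilization hedges for protocol overhead and competing traffic
    (the 0.7 default is an assumed, conservative figure).
    """
    return change_rate_mbps < link_mbps * link_utilization

# A busy database VM changing data at a sustained 80 Mbps:
print(replication_keeps_pace(80, 100))  # False: 80 >= 100 * 0.7
print(replication_keeps_pace(80, 200))  # True:  80 < 200 * 0.7
```

Running this kind of calculation against a VM's measured change rate, before committing to a replication target, is an easy way to spot a high-change database server that a given link can never keep in sync.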
Another way in which data transfer speeds can affect VM disaster recovery efforts is in a situation in which only a partial failover occurs.
DR plans are often designed as a contingency against a data center-level failure, with all of an organization's mission-critical workloads failing over to the remote site. However, a less severe disaster could conceivably cause some workloads to fail over to the remote site while other workloads continue to run in their original location.
With this in mind, imagine a situation in which an application server fails over to a remote site, but the database server that the application depends on continues to run in its original location. In that situation, every database transaction would have to pass across a WAN link. Depending on the transaction rate, the application could become much slower or stop working altogether. The key to avoiding this type of situation is to create failover groups of resources that should always fail over together.
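The failover-group idea can be sketched as a simple mapping. The group and VM names below (erp-app01, erp-db01, and so on) are invented for illustration; actual group definitions live in the hypervisor or DR orchestration tool, not in application code.

```python
# Hypothetical failover groups: VMs that depend on one another are
# grouped so a failover event always moves them as a unit.
FAILOVER_GROUPS = {
    "erp": ["erp-app01", "erp-db01"],   # app server + its database
    "web": ["web01", "web02", "cache01"],
}

def vms_to_fail_over(failed_vm):
    """Given one failed VM, return every VM in its failover group."""
    for members in FAILOVER_GROUPS.values():
        if failed_vm in members:
            return members
    return [failed_vm]  # an ungrouped VM fails over alone

print(vms_to_fail_over("erp-app01"))  # ['erp-app01', 'erp-db01']
```

Because the application server and its database belong to the same group, a failover of one always brings the other, so their chatty transaction traffic never has to cross the WAN link.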