The key to quick data recovery in a disaster is data replication. While the gold standard continues to be synchronous replication, the available alternatives have become quite extensive, opening the door to more flexible, multi-tiered disaster recovery strategies at a range of price points.
Enterprise arrays tend to offer the broadest range of options, including synchronous and asynchronous replication, as well as potentially critical features like consistency groups and various types of multi-hop replication. These systems also support a growing range of disk types, from solid state drives (SSDs) to Fibre Channel to Serial ATA (SATA). So it's conceivable that a multi-tier disaster recovery (DR) solution can be configured within a single storage platform.
More commonly, organizations look to midrange storage platforms for lower cost replication options. The challenge here is that the replication capabilities of these systems can vary significantly, from vendors that offer essentially the same or nearly the same functionality as on their tier 1 systems to those that offer only basic replication.
In environments with more limited needs, such as a handful of key applications requiring replication, host- or application-based replication can often be leveraged at an even lower price. These approaches, which include database log shipping, volume- or file-based replication, and application-specific replication tools, can provide a very cost-effective way to meet requirements, as long as the number of applications and servers remains manageable. However, configuring and monitoring a large number of such systems can increase complexity, and a range of software solutions may be required to support different applications. At that point, organizations typically look to a broader option at the storage level.
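As a minimal illustration of host-based log shipping, the sketch below copies any transaction-log files not yet shipped from a primary directory to a standby directory. The directory layout and the `.log` naming convention are assumptions for the example, not the behavior of any particular database product; a real setup would also transfer the files over the network and apply them to the standby database.

```python
import shutil
from pathlib import Path


def ship_logs(primary_dir, standby_dir):
    """Copy transaction-log files not yet present on the standby.

    A production log-shipping job would move files over the network and
    replay them on the standby database; this sketch only mirrors files
    to show the basic bookkeeping.
    """
    primary = Path(primary_dir)
    standby = Path(standby_dir)
    standby.mkdir(parents=True, exist_ok=True)
    shipped = []
    # Ship logs in name order so they can be applied in sequence.
    for log_file in sorted(primary.glob("*.log")):
        target = standby / log_file.name
        if not target.exists():
            shutil.copy2(log_file, target)
            shipped.append(log_file.name)
    return shipped
```

Running such a job on a schedule is what keeps the standby within a known recovery-point window; the shorter the interval, the less data is at risk.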
Reducing disaster recovery costs
Server virtualization promises many benefits for disaster recovery and is causing organizations to rethink their overall disaster recovery approach. One of the most significant cost-related impacts relates to the problem of idle systems. Not only can virtualization dramatically reduce the number of physical servers required for a disaster recovery site, but, due to the very nature of virtualization, these servers may actually be leveraged for daily operations. For example, DR systems may be deployed under normal circumstances for test and quality assurance, then take on the alternate or additional role of hosting production virtual machines failed over from the primary site in a DR scenario. When trying to justify disaster recovery costs, non-idle, multifunction assets can be the difference between a winning and a losing business case.
Given our fascination with virtualization, it's only reasonable to look there for potentially more affordable storage options. The promise of virtualization at the SAN fabric level has yet to be realized on a large scale. Replication is one service capability offered by such technology, and it can be done heterogeneously between different types of arrays. This opens the possibility of replicating to a less expensive array.
Beyond replication, disk-based backup can also play an important role in a multi-tiered disaster recovery strategy. Virtual tape libraries with deduplication and replication capabilities can provide a level of service below primary storage replication but higher than tape-based recovery. Because data is deduplicated, the bandwidth requirements are usually less. And by replicating to a similar platform at the DR site, recovery through restore can be significantly quicker than tape.
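To see why deduplication shrinks the bandwidth a replication link needs, consider a toy model that splits a backup stream into fixed-size chunks and "transmits" only chunks the remote site has not already stored. The 4 KB chunk size and SHA-256 fingerprinting are illustrative choices, not the algorithm of any specific virtual tape library product.

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed chunk size


def replicate_dedup(data: bytes, remote_store: dict) -> int:
    """Send only chunks the remote site lacks; return bytes transmitted.

    remote_store maps chunk fingerprints to chunk data, standing in for
    the deduplicated repository at the DR site.
    """
    sent = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in remote_store:
            remote_store[fingerprint] = chunk  # new chunk: transmit it
            sent += len(chunk)
    return sent
```

Re-sending an unchanged backup set transmits almost nothing, which is why deduplicated replication can run over a far smaller WAN link than replicating the raw backup data.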
A recent global disaster recovery study, "2008 Symantec Disaster Recovery Research," by Symantec Corp. found that 56% of applications are now classified as mission critical, up from 36% in 2007. This has serious implications for the IT infrastructure, indicating a continuing rise in business demands and expectations. Meeting these demands in a time of increased budget constraints requires the careful application of the appropriate technologies to satisfy requirements without over-delivering. Understanding not only the potential benefits but also the operational implications of these options is, of course, essential before heading down a given path. But it's clear that meeting current and future demands will require such a multifaceted approach.
This article was previously published in Storage magazine.
About this author: Jim Damoulakis is CTO at GlassHouse Technologies, a leading independent provider of storage and infrastructure services. He can be reached at firstname.lastname@example.org.
This was first published in June 2009