Enterprise Strategy Group Senior Analyst David Chapa reviewed disaster recovery trends, components of DR plans and the benefits of replication technology for DR in his recent report, "Replication Technologies for Business Continuity and Disaster Recovery." In this podcast, Chapa discusses some of the results of that study with SearchDisasterRecovery.com Editor Andrew Burton. Listen to the podcast or read the transcript below.
In your report, you note that improving DR is a key investment area for the businesses you surveyed, and that replication will play a key role going forward. Replication technology has been around for a long time; what's driving this renewed interest?
The interest in disaster recovery has always been top-of-mind for customers, but I believe that using replication technology to support business continuity and disaster recovery is becoming more of a trend simply because the cost of the links has come down considerably, and the technology has advanced over the years as well. You can do a lot with asynchronous replication that you couldn't do before. So I think it's really opening the doors quite a bit, and certainly one of the driving factors behind this is virtualization. You hear everyone say, "virtualization changes everything." And the reality is, it really does. Customers that are developing a virtualized infrastructure really have to think about how they're going to execute their overall data protection strategy.
Your report discusses replication technology available today -- such as CDP versus snapshot, synchronous versus asynchronous, host-based versus array-based -- but what technologies are you seeing deployed right now?
I think that all of those technologies are being used across the board. When you think of synchronous and asynchronous replication, you really divide those up: synchronous is used for high-availability solutions, where the transaction must be committed at the primary and at the secondary near-simultaneously. And then you have asynchronous replication, which the majority of solutions leverage to support DR functions.
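The synchronous/asynchronous distinction Chapa draws can be sketched in a few lines of code. This is a minimal, hypothetical illustration -- the class names and the in-memory "replica" are inventions for the example, not any vendor's product or API. The key difference is when the write is acknowledged: before or after the secondary copy exists.

```python
# Hypothetical sketch contrasting synchronous and asynchronous replication.
# All names here are illustrative, not tied to any real replication product.
from collections import deque

class Replica:
    """Stand-in for the secondary (DR) site."""
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value

class SyncPrimary:
    """Synchronous: the write is acknowledged only after the replica
    has also committed it -- the high-availability model."""
    def __init__(self, replica):
        self.data = {}
        self.replica = replica

    def write(self, key, value):
        self.data[key] = value
        self.replica.apply(key, value)   # blocks until the replica commits
        return "ack"                     # ack implies both copies exist

class AsyncPrimary:
    """Asynchronous: the write is acknowledged immediately and shipped
    to the replica later -- the typical DR model, with a small lag window."""
    def __init__(self, replica):
        self.data = {}
        self.replica = replica
        self.pending = deque()           # changes not yet replicated

    def write(self, key, value):
        self.data[key] = value
        self.pending.append((key, value))
        return "ack"                     # ack before the replica has the data

    def drain(self):
        """Ship queued changes to the replica (normally a background task)."""
        while self.pending:
            self.replica.apply(*self.pending.popleft())

sync = SyncPrimary(Replica())
sync.write("order-1", "shipped")
assert sync.replica.data["order-1"] == "shipped"   # replica already current

a = AsyncPrimary(Replica())
a.write("order-2", "pending")
assert "order-2" not in a.replica.data   # lag window = potential data loss
a.drain()
assert a.replica.data["order-2"] == "pending"
```

The lag between `write` and `drain` in the asynchronous case is exactly why Chapa associates it with DR rather than high availability: a site failure during that window loses the queued transactions.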
Now, we look at some of the other technologies that I mention in the report, like CDP and snapshot, whether host-based or array-based. CDP has been around for a while, just as snapshot has. And I think what we see in the industry is that CDP as a standalone product has found it difficult to gain adoption. But CDP integrated into mainstream solutions has been gaining a lot more use, and a lot more deployment in customers' environments. I'm seeing that quite a bit, as well.
And then, host-based versus array-based, this is a simple one to talk about. For those customers that have like-for-like vendors -- array-based replication from Vendor A to Vendor A -- that makes a lot of sense, because if you deploy [for example] your primary second-tier storage and you have the same storage array at the destination, you can replicate between the arrays.
In some cases, it's included in the licensing costs of the array. But those customers who are managing multiple storage tiers are looking at other ways to reduce the complexity of managing storage and replication for DR. And so they'll look at some of the host-based replication solutions to manage that for disaster recovery purposes.
Do you expect CDP to grow in popularity? I've always thought of it as a niche play.
It is a bit of a niche play, but as you look at some of the vendors that are integrating CDP technology into their core products, customers are now being given more of an opportunity to select the granularity of their replication. Does snapshot meet your business needs? Then snapshots it is, and you'll use the snapshot technology. But if your business requirement is more granular recovery, down to the second, then perhaps the CDP approach or model is suited to that particular customer's environment.
By taking those technologies and putting them into more mainstream solutions, it provides customers a lot more choice.
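The granularity trade-off described above -- discrete snapshot points versus CDP's per-write journal -- can be made concrete with a small sketch. This is an assumed, simplified model (hourly snapshots, an in-memory change journal); real products differ, but the recovery-point math is the same.

```python
# Hypothetical sketch of the recovery-granularity difference between
# snapshots (discrete points in time) and CDP (a continuous change journal).
# Timestamps are seconds; all data here is invented for illustration.
from bisect import bisect_right

snapshot_times = [0, 3600, 7200]                      # e.g. hourly snapshots

cdp_journal = [(10, "w1"), (75, "w2"), (3605, "w3")]  # (timestamp, change)

def snapshot_recovery_point(t):
    """Nearest snapshot at or before time t -- you can lose up to a
    full snapshot interval of writes."""
    i = bisect_right(snapshot_times, t) - 1
    return snapshot_times[i]

def cdp_recovery_point(t):
    """CDP replays the journal up to time t -- recovery granularity
    down to the individual write."""
    return max((ts for ts, _ in cdp_journal if ts <= t), default=0)

# Failure at t=3700: the snapshot rolls back to 3600, losing 100 seconds
# of activity; CDP recovers through the last journaled write at 3605.
assert snapshot_recovery_point(3700) == 3600
assert cdp_recovery_point(3700) == 3605
```

Whether that extra granularity justifies CDP's journal overhead is exactly the business-requirements question Chapa poses: if snapshot intervals meet the recovery-point objective, snapshots suffice.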
Would you outline some of the findings in your report on data classification?
Data classification could be a whole market landscape report in and of itself. I didn't go too deep into the data classification component, [but] only to say that it is very important for planning a data protection strategy, whether it's for backup and recovery or, in these cases, disaster recovery [and] business continuity. There comes a time when customers need to understand the value of their data. And you understand the value of the data by classifying it; at the highest level, I have always talked about it as "mission critical," "critical" and "deferred."
At ESG, we refer to it as Tier 1, Tier 2 and Tier 3 data. So data classification is very important. Not all data should be replicated, just as not all data should be backed up. Not all data needs to be retained for seven years, but unless you go through the process and the effort of understanding your data, its classification and its characteristics, you're not going to be able to make those choices. As customers are transitioning from physical to virtual… virtualization does change things. In this case, change is a very positive thing for customers if they can embrace it the right way. So as they transition from physical to virtual, they now have an opportunity to really go through the daunting task of identifying the data that is most critical and then setting up replication policies for disaster recovery based on the proper tiered levels of their data.
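One way to picture "setting up those replication policies based on tiered levels" is a simple tier-to-policy lookup. The tier names (Tier 1/2/3, mapped to mission critical, critical and deferred) come from the discussion above; the specific policies and RPO numbers below are assumptions invented for the example, not ESG recommendations.

```python
# Illustrative mapping of ESG-style data tiers to protection policies.
# The tier labels come from the interview; the policy values (replication
# mode, RPO targets) are assumed for illustration only.
TIER_POLICY = {
    "tier1": {"label": "mission critical", "replication": "synchronous",  "rpo_seconds": 0},
    "tier2": {"label": "critical",         "replication": "asynchronous", "rpo_seconds": 300},
    "tier3": {"label": "deferred",         "replication": None,           "rpo_seconds": 86400},  # backup only
}

def protection_policy(tier: str) -> dict:
    """Return the protection policy for a classified dataset's tier."""
    return TIER_POLICY[tier]

# Only the top tier pays for synchronous replication; deferred data
# is not replicated at all -- "not all data should be replicated."
assert protection_policy("tier1")["replication"] == "synchronous"
assert protection_policy("tier3")["replication"] is None
```

The point of the sketch is Chapa's argument in miniature: without classification there is no `tier` key to look up, so every dataset defaults to the same (and usually most expensive) protection level.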
Certainly this is something that we need to continue to talk about; I can guarantee this will be a topic that I will be writing about, as well as others from ESG.
This was first published in September 2011.