SearchDisasterRecovery.com recently spoke with Pierre Dorion, senior consultant with Long View Systems Inc., about the basics of disaster recovery storage, particularly whether it’s better to outsource disaster recovery, or keep it in-house. In this podcast, you will learn about data storage best practices for disaster recovery, whether or not tape backup is still a good choice for your organization, and about the pros and cons of cloud disaster recovery.
Listen to our podcast on disaster recovery storage, or read our transcript below.
Let’s start with the basics. Can you offer some disaster recovery storage best practices?
There are a number of technologies we can leverage from a best practices standpoint. Obviously, each technology will have to fit the need, and we're talking about the volume of data that's backed up. Traditionally, we'd use tape; today, we'd use disk more and more. And in some instances, in small environments, you can use removable media. With that said, the one common element to all of these is that the data needs to be taken elsewhere, offsite, away from the main location.
There are two things you want to protect your data from: obviously, system failure, which is why you copy the data; but beyond that, there could be a disastrous event affecting your facility, causing the loss of your IT environment. It doesn't matter how many backups you have on site at that point; if the data was not moved from that location, the data is lost.
Is tape backup still relevant for disaster recovery?
We hear a lot of talk about tape going away. I would say that, in a sense, we've improved the technology. That said, there are some significant investments that were made in tape technology, and it is still relevant today, as much as it is an older technology. You can think in terms of smaller environments … because they have the same data protection requirements as larger companies, in the sense that they need to protect their data and it needs to be available for them to do their business. That said, they don't have the same budget. And a lot of times, if you made an investment a few years back in tape technology and it is still working for you, it is still very relevant. It is a question of how much data you're backing up and how quickly you need it restored.
And that is a very important question when you're talking about disaster recovery. You need to plan your strategy in terms of how quickly you need things restored. Obviously, if you need things restored instantly, tape technology may not be the answer in most cases. If you need a failover-type scenario where you can't have any downtime, tape is not your answer. You need to start looking at data replication, clustering and so forth. Tape is not conducive to supporting that type of scenario.
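To illustrate the tradeoff Dorion describes between data volume and restore speed, here is a rough back-of-the-envelope sketch. The throughput figures are assumptions chosen for illustration, not vendor specifications, and real restores are also gated by tape mounts, seek times and network hops:

```python
# Back-of-the-envelope restore-time estimate.
# Throughput figures are illustrative assumptions, not vendor specs.

def restore_hours(data_tb: float, throughput_mb_s: float) -> float:
    """Hours to restore data_tb terabytes at a sustained rate in MB/s."""
    data_mb = data_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal units)
    return data_mb / throughput_mb_s / 3600

# Compare an assumed sustained tape stream against a disk-based restore.
for label, rate in [("tape (assumed 120 MB/s)", 120),
                    ("disk (assumed 400 MB/s)", 400)]:
    print(f"{label}: {restore_hours(10, rate):.1f} h to restore 10 TB")
```

Even with generous assumptions, a 10 TB restore from tape runs on the order of a day, which is why a failover scenario with near-zero downtime points toward replication instead.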
What do you need to make remote replication effective? For example, is WAN optimization necessary?
WAN optimization is definitely a plus. We talked earlier about the volume of data; that's where it starts to count, because we need to realize that WAN optimization devices are not cheap. That technology comes at a price, and it's only really worthwhile when you have large amounts of data you need to replicate. When we're talking about block-level replication, we're minimizing the amount of data we're replicating: we're not doing full mirrors all the time, we're not copying all the data. On top of that, we're seeing deduplication coming into play.
Deduplication uses a similar approach to WAN optimization. So it is useful in some cases, but not necessary all the time. One thing I need to stress here when we talk about data replication: replication is meant to take your data elsewhere, to take your data offsite. We tend to focus too much on getting the data out of the building, and we lose focus on what happens when that data needs to be used in the event of a disaster.
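A quick sketch of why block-level replication plus deduplication shrinks what crosses the WAN. The daily change rate and the deduplication ratio below are hypothetical numbers picked for illustration; real figures vary widely by workload:

```python
# Sketch: data sent per day with full mirroring vs. block-level
# replication plus dedup. Change rate and dedup ratio are assumptions.

def daily_replication_gb(total_gb: float, change_rate: float,
                         dedup_ratio: float) -> float:
    """GB actually sent per day: changed blocks divided by the dedup ratio."""
    return total_gb * change_rate / dedup_ratio

full_mirror_gb = 5000                                  # full 5 TB copy daily
optimized_gb = daily_replication_gb(5000, 0.02, 4.0)   # 2% change, 4:1 dedup
print(full_mirror_gb, "GB vs", optimized_gb, "GB per day")
```

Under these assumed numbers, the wire carries 25 GB a day instead of 5 TB, which is exactly the regime where an expensive WAN optimization appliance may no longer pay for itself.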
If the connectivity is poor, or restricted enough to need something like WAN optimization to get our data across the wire, what happens when we try to get it back after a major site disaster? It becomes an issue, and you need to think about that: it's not all about backing up the data, it's about being able to use it following a disaster. That's the ultimate goal here.
If you’re trying to bring back an entire data center after it was destroyed, well, now we have serious issues and we need a lot of bandwidth to bring it back. It’s a very important point to consider.
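The bandwidth point above can be made concrete with a simple transfer-time calculation. The link speeds, data volume and utilization factor are all assumptions for illustration:

```python
# Hypothetical sketch: how long a full-site restore takes over a WAN link.
# Link speeds, data volume and 70% effective utilization are assumptions.

def transfer_days(data_tb: float, link_mbps: float,
                  utilization: float = 0.7) -> float:
    """Days to move data_tb terabytes over a link_mbps megabit/s link."""
    bits = data_tb * 8e12                          # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6 * utilization)
    return seconds / 86400

for mbps in (100, 1000, 10000):
    print(f"{mbps} Mb/s link: {transfer_days(50, mbps):.1f} days for 50 TB")
```

Pulling 50 TB back over a 100 Mb/s link takes on the order of two months under these assumptions; even a gigabit link needs close to a week. That is the recovery-side bandwidth problem Dorion is warning about.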
What are the pros and cons of cloud disaster recovery? And, are cloud disaster recovery services really just for SMBs or is it a relevant approach for the enterprise?
For smaller companies (if you remember, a few years back Mozy was purchased by EMC), these services are useful for small amounts of data, where you can send it across the network over a home connection. When you're increasing the amount of data you're backing up, because the backup itself is incremental, block-level or file-level, depending on how it's done, that's all fine and dandy; the data is protected. But accessibility when you need the data becomes the issue.
For a very large company with terabytes of data backing it up to a cloud service provider, you need more than that, because you're going to want to access that data from a specific location, and the type of pipe between your chosen recovery location and where the data sits needs to be taken into consideration. You'll need a lot of bandwidth.
When you start looking at enterprise-level backup, you have to start considering where you will recover your systems and applications, and how you will access that data. And a lot of times, it brings you to the conclusion that not only is it okay to back up your data in the cloud, but you should have more services in the cloud to access that data. You still want your applications relatively close to your data. So, with large data volumes, it goes beyond simple cloud backup. We need to start talking about infrastructure as a service and platform as a service in the cloud, and how to bring our infrastructure closer to our data.
What about other service providers—can a company outsource DR entirely rather than just storing data in the cloud?
If you look at companies like SunGard and IBM, these companies have had some very successful services providing DR. They are basically providing an alternative recovery site: your data is being replicated, and they have standby equipment that is ready to be deployed, sometimes even hot equipment where replication is taking place. And today, some cloud service providers are starting to offer similar levels of service. There's definitely a big plus to outsourcing your DR, as opposed to just the backup, especially at very high volumes. But it's also useful for the SMB that doesn't want to deal with the cost of a standby infrastructure. You're buying into a service; it becomes an operational cost, an insurance policy.
On the flip side, why not just keep DR in-house?
Obviously, running your own DR has its own advantages. Especially if you're dealing with high-security, highly confidential, protected data, sending your data to the cloud, to a service provider that is not strictly dedicated or cannot provide you the level of comfort you need from a security perspective, might force you to own your own DR. The downside to that is obviously the cost: now there is a capital cost you have to take into consideration. It really depends on your IT budget as well, the skills you have and the type of facilities you have. It may be relatively easy for you to own your own DR infrastructure if you have, for example, three data centers in three different towns and your company is big enough for that. You can start leveraging each facility as DR for the other facilities.
With that said, we need to take into consideration the element of capacity. You don’t want to be spending money on idle equipment, so you tend to leverage that equipment to make the best out of your IT investment and not have things dormant.
This is where the concept of the private cloud comes into play. The public cloud is obviously when you deal with a service provider; you can also create your own private cloud and leverage your own infrastructure for DR. It is capital-intensive, so it is not for everyone. I would say that smaller companies gain a lot by outsourcing DR.
Pierre Dorion is the data center practice director and a senior consultant with Long View Systems Inc. in Phoenix, Ariz., specializing in the areas of business continuity and disaster recovery planning services and corporate data protection. Over the past 10 years, he has focused primarily on the development of recovery strategies, IT resilience and recoverability as well as data protection and availability engagements at the data center level.
This was first published in June 2011