

Disaster recovery costs: Making testing and planning cost-effective

DR can seem daunting to many storage administrators because of its cost. Here are nine ways to make your disaster recovery more cost-effective.

For many, the biggest inhibitor to implementing an effective disaster recovery (DR) plan is cost. The sticker shock often associated with the price of a proposed DR solution is something organizations strapped for funds are simply unwilling to swallow, and it can stop a disaster recovery project in its tracks.

For many data storage administrators, the question then becomes, "Okay, how much DR can we buy for this amount of money?" rather than, "Here are the capabilities we need; let's find a way to make it affordable." The goal for every company trying to compose a cost-effective disaster recovery plan should be to understand the range of cost-saving options and their associated tradeoffs, and to revise its disaster recovery strategy based on the choices deemed most acceptable.

In this article, learn where you can find efficiencies that won't compromise disaster recovery, including opportunities for savings in areas like DR testing and new technologies that reduce disaster recovery costs.

Here are some areas to explore to realize more cost-effective disaster recovery:

1. Eliminate idle assets. One of the greatest cost contributors to disaster recovery is the expense of maintaining assets that largely sit idle waiting for a disaster to happen. For years, it was not uncommon to see servers and data storage sitting unutilized in a disaster recovery facility. Today, very few organizations can allocate funds for such unused capacity. Devising a plan that allows systems to be multi-purposed to any degree can dramatically improve the disaster recovery cost structure.

One of the most common approaches is to leverage test and development environments as backups for disaster recovery. The challenge here is determining how long these functions can be unavailable during a disaster situation.

2. Standardize. The more different types of widgets that are deployed, the more distinct types of resources are needed for disaster recovery. Likewise, a proliferation of configurations and system software variants of the same platform makes DR design and testing more costly. Limiting variants of platforms and other infrastructure components and defining standard configurations reduces complexity and unnecessary costs.

3. Automate. Beyond standardization, the use of automation, where possible, can simplify the testing process, improve reliability and drive efficiencies by reducing the number of hours necessary to complete tasks. Automation opportunities exist in areas like system deployment, data replication, and host or application failover.
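Scripted failover is one of the simpler automation wins. The sketch below shows the general pattern of running ordered recovery steps and halting at the first failure; the step names and stubbed actions are hypothetical illustrations, not any particular vendor's API.

```python
# Minimal sketch of scripted failover orchestration. In practice each
# action would call a storage array, database or DNS API; here they are
# stubbed with lambdas so the control flow is easy to see.

def run_failover(steps):
    """Run ordered recovery steps, stopping at the first failure.

    Each step is a (name, action) pair where action() returns True on
    success. Returns a list of (name, succeeded) results in run order.
    """
    results = []
    for name, action in steps:
        ok = bool(action())
        results.append((name, ok))
        if not ok:
            break  # don't continue past a failed prerequisite
    return results

# Example: promote a replica, then repoint the application (both stubbed).
steps = [
    ("promote-replica", lambda: True),   # e.g. storage/DB replication API
    ("repoint-app-dns", lambda: True),   # e.g. DNS or load-balancer update
]
print(run_failover(steps))
```

Because every run executes the same steps in the same order, the script itself becomes living documentation of the failover procedure, which also helps with tips 6 and 7 below.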

4. Virtualize. Perhaps the greatest potential opportunity to drive down DR planning and testing costs today is offered by virtualization. Server virtualization mitigates the idle assets problem and can greatly assist in efforts to automate the disaster recovery process, not only reducing costs, but offering the potential for improved levels of service.

Storage virtualization also has a role to play, but other than adding a common management layer, it's simply another way to offer things like data migration, data replication and snapshots.

5. Operationalize. All too often the DR process is treated as an exception, something that exists somewhere on the fringes of IT rather than as part of day-to-day IT operations. By better integrating disaster recovery into the core functions of IT and thinking about DR as part of application development, architectural design and operational planning activities, more efficient DR solutions can be implemented. After-the-fact or one-off solutions that must be force-fitted and then exist as exceptions are costly and difficult to manage and maintain.

6. Document. A traditional weakness in the realm of disaster recovery is the availability of current and comprehensive documentation. What does this have to do with cost? Besides the real risk of extended downtime and delays in the event of a disaster, poor or missing documentation can contribute to cost overruns in disaster recovery testing and can require extended time from senior resources where, with proper documentation, more junior personnel could get the job done.

7. Simplify. Complexities that drive up costs have a way of creeping into organizations if steps aren't actively taken to avoid them. Previously mentioned factors like standardization and virtualization can go a long way towards simplification, but there are other ways to simplify. For example, in many cases, customers implement discrete point solutions, such as one-off host-based replication or backup solutions, to support the recovery of a specific application, which is often based on application vendor recommendations. In extreme cases, this can result in multiple data backup or replication technologies that each must be supported and managed by IT. While unique application requirements must be considered, supporting multiple recovery solutions can become a DR management nightmare.

8. Compartmentalize testing. Disaster recovery testing can be a highly disruptive event that impacts day-to-day operations, raises overtime costs and generally increases anxiety. While large-scale DR testing is essential, consistent compartmentalized testing of networks, servers, storage and, to some degree, application components can help ensure recoverability while helping to reduce the disruption and avoid unplanned expenses related to failed or delayed DR tests.
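Compartmentalized testing is easier to sustain when each layer's checks are registered separately and can be run on their own schedule. The sketch below illustrates that structure; the component names and pass/fail logic are hypothetical stand-ins for real network, storage and server verifications.

```python
# Minimal sketch of compartmentalized DR testing: checks are grouped by
# infrastructure layer so one team can verify its component without a
# full-scale, all-hands DR exercise.

CHECKS = {
    "network": [("vpn-tunnel-up", lambda: True)],
    "storage": [("replica-in-sync", lambda: True),
                ("snapshot-recent", lambda: True)],
    "server":  [("standby-host-boots", lambda: True)],
}

def run_checks(component):
    """Run only the named component's checks; returns {check_name: passed}."""
    return {name: bool(check()) for name, check in CHECKS[component]}

# Test just the storage layer without disturbing network or server teams.
print(run_checks("storage"))
```

Running these small suites routinely between large-scale tests catches configuration drift early, which is exactly the kind of surprise that derails (and inflates the cost of) the big annual exercise.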

9. Optimize data. When it comes to reducing or avoiding DR costs, less can be more. Technologies such as data deduplication and thin provisioning reduce underlying data storage footprints and can also significantly decrease bandwidth requirements needed for data replication -- thereby making DR significantly more affordable. Another often overlooked data optimization practice that can reduce DR data footprint and traffic is data archiving. It's important to consider that from a data perspective, disaster recovery is primarily concerned with currently active data sets, but the reality is that often the majority of data sitting on today's storage arrays is non-current, historic data that has accumulated over time. A program to purge unneeded data or move it offsite to a cloud or secondary repository could greatly reduce the data storage capacity required at a DR location and may likely even help speed up the recovery process -- not to mention the savings associated with freeing up expensive primary storage at the primary location.
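The bandwidth impact of data reduction is easy to estimate with back-of-the-envelope arithmetic. The sketch below computes the average replication bandwidth needed for a daily change set after deduplication; all input figures are illustrative assumptions, and it uses decimal units (1 GB = 1,000 megabits × 8) while ignoring protocol overhead.

```python
# Illustrative arithmetic only: how deduplication shrinks the replication
# link needed to ship a day's changed data within a given window.

def replication_bandwidth_mbps(daily_change_gb, dedup_ratio, window_hours):
    """Average Mbps needed to replicate deduplicated daily changes.

    daily_change_gb -- changed data per day, in GB (decimal)
    dedup_ratio     -- e.g. 5 for a 5:1 reduction
    window_hours    -- hours available to complete replication
    """
    effective_gb = daily_change_gb / dedup_ratio   # data actually sent
    megabits = effective_gb * 1000 * 8             # GB -> megabits
    return megabits / (window_hours * 3600)        # spread over the window

# Assumed example: 500 GB of daily change, 5:1 dedup, 24-hour window.
print(round(replication_bandwidth_mbps(500, 5, 24), 1))
```

With those assumed numbers, dedup cuts the steady-state requirement from roughly 46 Mbps to under 10 Mbps, which can be the difference between an affordable WAN link and one that sinks the DR budget.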

In the end, disaster recovery is a type of insurance that people invest in and then hope they never need. Even if you think you'll never use your DR plan, it's important to recognize the necessity of being properly protected. The key is to buy enough, but not too much, and the items listed above represent areas to consider when making long-term planning decisions. Determining the right approach requires taking the time to fully understand the problem before diving into specific technical solutions. Once the problem is understood, however, the technical nuances of different solutions become critical and can impact disaster recovery costs substantially. The good news is that the core technologies that support DR are more affordable than ever. Ultimately, cost-effective disaster recovery is more a matter of combining the right technologies with the right policies and processes, and this is where organizations frequently come up short.

About this author: Jim Damoulakis is CTO at GlassHouse Technologies, a leading independent provider of storage and infrastructure services. He can be reached at [email protected]
