

Storage disaster recovery plans must account for weather threats

Do your storage DR plans factor in weather-related issues? Sometimes, a low-tech approach to protecting your precious data works better than the cloud.


Organizations located in coastal areas tend to view the onset of summer with equal parts happiness and dread. IT planners are no exception. In addition to juggling staff vacation schedules, managers must concern themselves with often-violent weather and dicey electrical service.

I know: software-defined storage (SDS) and hyper-converged infrastructure (HCI) have supposedly made short work of availability concerns, at least when it comes to storage disaster recovery. And clouds are said to be a miraculous conflation of technologies that keeps operations and data "high and dry" when, for example, the local levee breaks.

But these assumptions are not always true. A few years ago, Hurricane Sandy taught many firms located along the Atlantic Coast that their data -- whether hosted on disk or flash drives, or in high-availability topologies -- was still at risk. The reality is that most hypervisor-controlled SDS or HCI platforms implement storage architectures that do the following:

  •  create storage silos that cannot effectively be shared or managed, especially when heterogeneous hypervisors are used; and
  •  reintroduce "identicality" requirements into the data protection game.

Silos of storage are a problem

Siloed storage, while it might simplify storage management behind an individual server, demonstrably degrades overall allocation and utilization efficiency across heterogeneous infrastructure with multiple SDS stacks controlled by different brands of hypervisor software.

In such environments, IT administrators find themselves confronting a confusing mishmash of data protection microcosms, each of which must be coordinated and tested separately. As a result, data protection and storage disaster recovery (DR) become more complicated and harder to manage.

The identicality drawback

Identicality refers to a storage mirroring requirement in which identical storage media and equipment must be used by the replicating infrastructure. It means consumers must stand up gear at DR facilities that is identical to what they have in their production environments to make data replication or migration possible. This can be challenging, particularly when using commercial hot sites -- traditional data centers offered on a subscription basis -- or cloud-based storage disaster recovery services.

Identicality was once the bane of DR planners, back when proprietary storage array manufacturers designed their rigs to prohibit the easy copying of data between arrays from two or more vendors. With the arrival of hypervisor-controlled SDS products, identicality is back with a vengeance: In some cases, you can't use storage controlled by hypervisor brand X to store data copied -- backed up, snapped or replicated -- from storage controlled by hypervisor brand Y.


You cannot readily dismiss or rectify the challenges associated with identicality. Some third-party data protection software vendors, such as Acronis and Arcserve, continue to advance a hardware- and hypervisor-agnostic approach to data protection, in which protection is instantiated as a service that can be pointed at data wherever it is stored. This assumes an administrator exists in the IT organizational chart to set up and maintain that service. That's an increasingly unlikely assumption, as specialists are being replaced by "virtualization administrators" who are versed only in their own hypervisor technology.

The cloud approach

Of course, one deployment option for universal data protection services is to source them from a cloud in the first place. IBM and others have created cloud deployment models for their data protection software. Big Blue and its surrogates can work with enterprises to deploy client software to production servers, providing customers with a more universal data protection strategy.

This solves some deployment issues, but doesn't necessarily fix everything.

Just having cloud-enabled backup software does not make a service provider a disaster recovery as a service (DRaaS) provider. It certainly doesn't make the vendor an expert in the nuances of storage disaster recovery planning. Many colocation processing and outsourcing providers have seized on DRaaS as a way to generate more revenue, not because they know anything about data protection or DR. One provider was recently found to be discarding the customer data it was sent, apparently playing the odds that customers would never ask to have their backup data restored.

Even when DRaaS providers are sincere in their desire to deliver protective services, many lack the bandwidth to move data back and forth between the customer and the cloud -- particularly if a regional calamity occurs. Backup data can amass over time at the service provider, but retrieving it all at once can be a nightmare.

Consider the arithmetic: a link of at least 10 Gbps is required to move just 10 TB of data between locations within an acceptable timeframe of roughly two hours and 15 minutes. But such links are expensive when you're talking WANs. The price is more affordable using metropolitan area network services such as Multiprotocol Label Switching (MPLS) or Synchronous Optical Network (SONET). The problem with these networks, however, is that they tend to be "crosstown" rather than "cross-country."
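That timeframe falls straight out of the math. Here's a minimal sketch of the calculation in Python (the 10 TB payload and the overhead-free, full-line-rate transfer are illustrative assumptions; protocol overhead, congestion and shared links all push the real number higher):

```python
# Back-of-the-envelope estimate of bulk data transfer time over a link.
# Assumes an ideal transfer at full line rate with zero protocol overhead.

def transfer_hours(data_tb: float, link_gbps: float) -> float:
    """Hours needed to move data_tb terabytes over a link_gbps link."""
    bits = data_tb * 8 * 10**12           # decimal terabytes -> bits
    seconds = bits / (link_gbps * 10**9)  # gigabits per second -> bits per second
    return seconds / 3600

print(f"{transfer_hours(10, 10):.2f} hours")  # ~2.22 hours (about 2h13m)
print(f"{transfer_hours(10, 1):.2f} hours")   # ~22 hours on a 1 Gbps link
```

Drop to a more typical 1 Gbps WAN link and the same restore takes the better part of a day, which is why bulk retrieval after a regional disaster so often becomes the bottleneck.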

The Hurricane Sandy effect

Hurricane Sandy had a massive 210-plus mile diameter. Folks with flooded data centers in New York City were rightfully concerned when they learned of water encroaching on their hot site or cloud DRaaS service provider in Philadelphia. Today, there is still a good chance that a modestly priced DRaaS provider is hosting your backup data within 50 miles of your facility. At that distance, both your original data and your backup could be at risk from a similar storm.

The solution is to use a portable storage medium -- tape, ruggedized disk or even modular flash -- to make data copies and then ship them en masse to a secure facility a significant distance away from your, or your cloud provider's, data center. As a rule of thumb, the minimum safe distance between original data and backup data should be at least 62 miles (100 kilometers), though recent weather events suggest that double that distance might be wise. I know it sounds low tech, but the strategy has wheels under it, and an increasing number of public cloud providers now support it as a way of "seeding" cloud storage. Hope you have a calm weather season in 2017.
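If you want to sanity-check whether a candidate backup location actually clears that rule of thumb, a great-circle calculation is enough. Here's a minimal sketch using the standard haversine formula (the New York and Philadelphia coordinates and the 100-kilometer threshold are illustrative assumptions):

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical sites: production in New York City, backup vault in Philadelphia.
d = haversine_km(40.7128, -74.0060, 39.9526, -75.1652)
print(f"{d:.0f} km apart")  # ~130 km: clears 100 km, but not a doubled 200 km
```

As the Sandy example above shows, New York to Philadelphia satisfies the 100-kilometer rule of thumb yet still left both sites inside the same storm -- which is exactly the argument for doubling the distance.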

