Storms like Sandy show that despite an increased dependency on automation, there's no replacement for a sound disaster recovery planning process in your data storage environment.
If you're reading these words, the good news is that the doomsayers advancing the Mayan prophecy have been proven wrong. To wit: December 21, 2012, has come and gone, and the planet didn't change its magnetic poles or eject its crust, thereby expelling you, me and all our mismanaged data into the icy void of space.
I would argue that this is, for the most part, a good thing. Frankly, I've never experienced an apocalypse, but from what I've seen in movies and read in books, it seems like the "end of days" would be most unpleasant. So, that's the good news.
The not-so-good news is that we may have more to worry about than pre-Columbian prophecies. You might remember that late-season hurricane/tropical storm/superstorm called Sandy that visited its own dystopian reality upon the residents of the N.J./N.Y./Conn. tri-state area in late October. Counting that natural disaster, we've now seen several consecutive years of "once-in-a-century" weather events, providing what some climatologists regard as empirical evidence of a mounting planetary problem.
This problem has less to do with climate change than it does with the consequences of a collision between severe weather and poorly conceived ideas of civil engineering, architecture, electrification, transportation and distribution, and urban planning. As the old saying goes, "Into each life some rain must fall." The problem is that so much of our stuff is built on sand -- or at least sits at or below sea level. Given the warnings that appear in the text of Matthew (7:24-27) in the Bible, we've apparently been building our stuff on sand for quite a while.
I started a countdown clock after Sandy hit the beach in New Jersey to see how long it would take for some server hypervisor vendor or cloud service provider to spin a yarn about how their technology provided a life raft or something for some business just as the sea reached the lobby. Within 24 hours, the expected story appeared in my inbox, sent by an enthusiastic PR flack. Changing all the names of the principals, the story went like this:
Cloud Provider X saved Engulf and Devour Company from certain demise by providing a location where all of the company's data could be transported and kept safe from the inclement weather. Engulf and Devour acted promptly two days before landfall to establish a network interconnect with Cloud Provider X and to copy all its data to cloud storage provisioned by the vendor. After approximately 48 hours, data was successfully transferred and could be accessed by apps and end users from its new storage location in the cloud. This, the PR representative offered, proved the value of cloud storage as a way to replace the traditional disaster recovery (DR) planning process with state-of-the-art high availability (HA).
On its face, this sounded like an impressive case study. Digging into the details, however, my initial interest receded more quickly than Sandy's tidal surge. It turned out that Engulf and Devour's entire complement of data comprised 1.8 TB. They were adding a couple of hundred gigabytes to this store every day or two. Even with this smallish amount of data to protect, copying it over to the cloud service provider required about two days. An LTO tape backup would have taken, at most, about two hours. Copying the data to a second hard disk, say a 2 TB removable SATA drive, may well have taken even less time. Depending on how far away the physical facility of the cloud service provider was located, couriering over a tape or a disk drive would likely have taken less than 24 hours. So, the story of the miracle of "across-the-wire HA for data" began to seem a tad less miraculous to me.
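The arithmetic behind that skepticism is easy to check. As a rough sketch -- the sustained throughput figures below are illustrative assumptions on my part, not numbers from the case study -- here is how the competing options stack up for moving 1.8 TB:

```python
# Back-of-envelope transfer-time comparison for 1.8 TB of data.
# Throughput figures are assumed, round-number rates for illustration only.

TB = 1e12  # decimal terabyte, in bytes


def transfer_hours(data_bytes, bytes_per_second):
    """Hours needed to move data_bytes at a sustained throughput."""
    return data_bytes / bytes_per_second / 3600


data = 1.8 * TB
scenarios = {
    "WAN link to cloud (~10 MB/s effective)": 10e6,
    "LTO tape backup (~140 MB/s native)": 140e6,
    "Removable SATA drive copy (~200 MB/s)": 200e6,
}

for name, rate in scenarios.items():
    print(f"{name}: {transfer_hours(data, rate):.1f} hours")
```

At an effective 10 MB/s, 1.8 TB takes roughly 50 hours -- about the two days reported in the story -- while a local tape or disk copy finishes in a few hours, before the courier even becomes the bottleneck.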
The moral of this story is simple. Clouds, virtualization and so on are often represented as silver bullet solutions for processes like data protection. I hear many vendors argue that these are part of a new generation of HA technologies that put older concepts like disaster recovery and business continuity (BC) out to pasture. This is marketecture, since HA has always been a tool within the toolkit used by DR/BC planners. The recovery/continuity planning process seeks to apply the right tools (HA, tape backup, etc.) to the right targets (business processes, and the applications and data that serve them) based on their business value and sensitivity to disruption. Used judiciously, and with a common-sense perspective about alternatives, HA can provide value; applied indiscriminately, HA just makes everything cost more without contributing any greater protection or availability to the user.
I hope your 2013 will be disaster free. But I also hope that you'll be able to institute a common-sense business continuity planning practice in your workplace if you lack one today. Whatever you do, don't buy into the rhetoric of the tech peddlers. Disaster recovery isn't obsolete; given the increased dependency on automation to make fewer staff more productive, such planning has never been more urgent than it is today.
About the author:
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.
Focus: Disaster recovery planning and virtualisation (ComputerWeekly.com)