2010 has been an interesting year for disaster recovery planners and IT practitioners in general. While the constant chatter about economic doom and gloom seems to have come to a long-awaited end, it hasn't exactly been replaced with exuberant talk of economic recovery. The global recession has forced many IT organizations to find ways to get things done on a reduced budget, and has created an opportunity to investigate different approaches to disaster recovery.
Data center outsourcing: Colocation data centers and more
Risk transfer has long been recognized as an acceptable type of response in the risk management industry, and data center outsourcing is a growing form of risk transfer. Companies are choosing to focus on their core competencies and leave data center operations and management to specialists. While large organizations can absorb the cost of maintaining a data center facility as a cost of doing business, smaller organizations often cannot, and they too are starting to turn to data center outsourcing.
Data center outsourcing comes in many forms and can include:
- Colocation of the disaster recovery environment for critical systems
- Colocation of the critical production environment
- Outsourcing the entire IT function through managed IT services
Data center outsourcing has many advantages: it gives smaller companies access to facilities with power and cooling redundancies, increased security, and 24/7 monitoring and support, capabilities they usually couldn't afford on their own. Expect that trend to grow in 2011 as many smaller and older IT server rooms run out of capacity.
Cloud disaster recovery
We heard a lot about cloud computing in 2010 and can expect to hear a lot more moving into the new year. According to the Gartner Hype Cycle report released in October 2010, cloud computing has now reached the peak of the hype cycle, and Gartner predicts it will become mainstream in fewer than five years. A number of companies began testing the cloud concept in 2010 with small components of their IT environment, including cloud backup. While no one is jumping to build their entire disaster recovery strategy around the cloud, we will see more companies in 2011 leveraging what can be considered yet another form of risk transfer, as mentioned earlier. Disaster recovery will likely be a preferred point of entry for many because it allows them to test the cloud in a relatively low-risk environment.
Some cloud disaster recovery offerings are now available. IBM Corp.'s Managed Backup Cloud is mostly focused on data backup/replication to the cloud, while Geminare Inc.'s iCloudRecovery (or Recovery as a Service) takes it one step further by leveraging virtualization to offer both data replication and server failover capabilities, to name only a few. There are also offerings built on availability or uptime, such as Microsoft's Business Productivity Online Standard Suite (BPOS) and Salesforce.com. Moving certain applications, such as CRM or email, to the cloud to ensure their availability is definitely aligned with a disaster recovery strategy, albeit with a stronger focus on availability than on recovery.
Some obstacles remain, however, and security and privacy concerns are probably the biggest hurdle. By design, the cloud can be anywhere and so can your data, so companies will want assurance that confidential information doesn't end up in the wrong hands. However, the U.S. government recently migrated its Recovery.gov website to Amazon.com's EC2 commercial cloud service. This high-profile adoption of the cloud concept will likely help convince other organizations that cloud disaster recovery is a viable option.
PDAs and smartphones
Devices such as the BlackBerry, iPhone and Android handsets have definitely become embedded in everyday life at home and at work. Whether used for email, texting or, more recently, video calls, these handheld devices have created a problem for IT managers.
With users increasingly leaving laptops behind in favor of these devices, service interruptions are becoming unacceptable. This forces organizations to reconsider the recovery priority and availability of the enterprise systems that support handhelds. For example, the U.S. federal government has been incorporating mobile devices into new areas, such as wirelessly transmitting data gathered for the 2010 Census, according to Government Computer News; handhelds are also now being used for field inspections at the Department of Agriculture, according to the same source. While this increases centralized processing and raises the criticality of the supporting infrastructure, the devices themselves typically store a very limited amount of data, and usually little to no critical data. This makes them relatively easy to replace and definitely easier to back up.
So far, handhelds have played a limited role in disaster recovery strategies, mostly being leveraged for basic communication functions such as voice calls, email, texting, instant messaging and sending visual data. However, we can expect this technology to start playing a much greater role in DR in 2011 and beyond. For example, Geminare, mentioned earlier, allows its customers to manage their replication/failover server infrastructure directly from an iPhone. This is only the beginning, and we will see many more Web-enabled management interfaces ported to handhelds.
Remote data replication
Data replication is nothing new and has definitely become part of a growing number of disaster recovery strategies, with more companies using remote data replication as a primary line of defense. Tape remains widely used, but the unprecedented amount of data being backed up, combined with retention policies that are sometimes overly generous, has created massive tape storage environments that are difficult to manage. Cheaper network bandwidth, WAN optimization, data reduction (deduplication) and more affordable software are all contributing factors to data replication becoming the preferred disaster recovery method for many.
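To illustrate why deduplication shrinks the bandwidth that replication needs, here is a minimal Python sketch of fixed-size block deduplication. This is an illustration only, not any vendor's implementation: real products typically use variable-size chunking and far more sophisticated indexing, and the block size and function names here are assumptions for the example.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; real products often use variable-size chunking


def dedup_blocks(data: bytes, seen: dict) -> list:
    """Split data into blocks and return only blocks not already replicated.

    'seen' records the SHA-256 digest of every block already sent; a block
    whose digest is known is replaced by a reference on the remote side,
    so only new or changed blocks actually cross the WAN.
    """
    new_blocks = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen:
            seen[digest] = True
            new_blocks.append(block)
    return new_blocks


# First backup: two distinct blocks must be transmitted (the repeated
# "A" block counts only once, which is deduplication at work).
seen = {}
day1 = b"A" * 8192 + b"B" * 4096
sent_day1 = dedup_blocks(day1, seen)

# Next day only one block changed, so only one block is sent.
day2 = b"A" * 8192 + b"C" * 4096
sent_day2 = dedup_blocks(day2, seen)
print(len(sent_day1), len(sent_day2))  # → 2 1
```

The same idea explains why a daily replication job can move a fraction of the data a full tape backup would: unchanged blocks are never retransmitted.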
Automation of the disaster recovery process
Server virtualization has become a key component of many disaster recovery strategies over the past few years; among other things, it offers the ability to deploy a standby IT environment at a fraction of the cost, something that wouldn't have been possible just a few years ago. The technology is now found not only in disaster recovery environments but also in many primary or production IT environments. This has led to tools that automate the movement of server images across physical systems without interruption, thanks to shared centralized storage. Add remote data replication to the mix, and companies now have the ability to automate a recovery process that has traditionally depended on disaster recovery plan documentation. This is not to say that all IT organizations are moving to high-availability clustering and instant failover, but using storage/server virtualization and data replication together allows companies to maintain functional recovery environments without having to develop and maintain lengthy disaster recovery plan documents. Expect this disaster recovery trend to grow as we move forward.
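As a rough illustration of this kind of automation, the Python sketch below shows a hypothetical failover routine: probe the primary site a few times, and if it stays unreachable, bring up the replicated standby images. The health probe and VM start-up functions are stand-ins for the hypervisor and replication product APIs a real environment would use; every name here is an assumption for the example.

```python
import time

# Hypothetical stand-ins for vendor APIs; a real orchestrator would call
# the hypervisor/replication product to probe and power on replica VMs.


def primary_site_healthy() -> bool:
    """Placeholder health probe; a real check would ping services at the primary site."""
    return False  # simulate a primary-site outage for this sketch


def start_replica_vms(vms):
    """Placeholder: power on the replicated standby images in dependency order."""
    return [f"{vm}: started from replica" for vm in vms]


def run_failover(vms, probe=primary_site_healthy, confirm_probes=3, interval=0):
    """Declare a disaster only after several consecutive failed probes,
    then start the standby VMs automatically, the step a written DR plan
    would otherwise describe as a manual runbook."""
    failures = 0
    while failures < confirm_probes:
        if probe():
            return ["primary healthy, no action"]
        failures += 1
        time.sleep(interval)
    return start_replica_vms(vms)


result = run_failover(["db01", "app01", "web01"])
print(result)
```

The point of the sketch is the structure, not the stubs: confirmation before declaring a disaster, then an ordered, repeatable start-up sequence that replaces pages of recovery documentation.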
About this author: Pierre Dorion is the data center practice director and a senior consultant with Long View Systems Inc. in Phoenix, Ariz., specializing in the areas of business continuity and DR planning services and corporate data protection.