Managing and protecting all enterprise data


Distance yourself from disaster

Long-distance replication is reachable with new optical and IP storage networking technologies.


Extended SANs aren't cheap
Storage area network (SAN) extensions can use IP or Fibre Channel (FC) as the transport mechanism, with optical fiber or twisted pair as the transport medium. Using IP for transport is the least expensive way to deploy these types of SANs.

SAN extension pricing - using IP and Gigabit Ethernet as the transport mechanism - ranges from $50,000 to $100,000, according to Steve Duplessie, a senior analyst with the Enterprise Storage Group, Milford, MA. "This type of solution involves FCIP [the Fibre Channel over IP protocol]: an IP device at each end of the extension encapsulates the FC data in an IP envelope, and the device at the other end of the SAN de-encapsulates it." The encapsulated FC data runs over IP and Gigabit Ethernet on twisted pair. It can also run over the Internet using VPNs. Additional expenses include contracting with a managed service provider for IP services.
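The encapsulation step Duplessie describes can be sketched in miniature. This is an illustration only, not the real FCIP frame format (which is defined in RFC 3821 and carries FC frames inside a TCP stream); the length-prefix header here is a stand-in for the actual FCIP encapsulation header.

```python
import struct

def encapsulate(fc_frame: bytes) -> bytes:
    # Toy header: a 4-byte length prefix standing in for the real FCIP
    # encapsulation header that the gateway prepends before sending over IP.
    return struct.pack("!I", len(fc_frame)) + fc_frame

def decapsulate(packet: bytes) -> bytes:
    # The gateway at the far end strips the header and recovers the FC frame.
    (length,) = struct.unpack("!I", packet[:4])
    return packet[4:4 + length]

frame = b"example FC frame payload"  # placeholder for a captured FC frame
assert decapsulate(encapsulate(frame)) == frame
```

The round trip above is the whole trick: the FC fabric on each side never knows the frame crossed an IP network in between.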

San Jose, CA-based Nishan Systems, for example, offers this type of SAN extension for $30,000 to $50,000, according to Randy Fardal, vice president of marketing. The solution pairs a Nishan switch that implements the FCIP protocol with an optical transceiver, permitting a span of up to 80km between the primary and backup data centers.

SAN extensions that use FC technology throughout - including for data transport - are considerably more expensive than a hybrid of FC and IP/Gigabit Ethernet. For example, ADVA's coarse wavelength-division multiplexing (CWDM) optical solutions start at $100,000 for four wavelengths - the least expensive optical option. The company's dense wavelength-division multiplexing (DWDM) solutions, according to Abdul Kassim, vice president of marketing, start at $310,000 for four wavelengths and $570,000 for eight wavelengths, and top out at $1.1 million for 32 wavelengths.

In addition to these optical equipment costs, prospective buyers need to budget for integration costs and fiber lease fees, which typically run $1,000 per mile per month and can be five times that price in areas where fiber is scarce. License fees for management software for the optical equipment add another $10,000 to $20,000 to the total.
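Those lease fees dominate quickly at MAN distances. A rough annualized sketch, assuming only the quoted $1,000 per mile per month rate (the function name and structure are my own, not from any vendor's pricing model):

```python
KM_PER_MILE = 1.609344

def annual_fiber_lease(distance_km: float,
                       rate_per_mile_month: float = 1000.0) -> float:
    """Yearly fiber lease cost at the quoted per-mile monthly rate."""
    miles = distance_km / KM_PER_MILE
    return miles * rate_per_mile_month * 12

# A 50km extension at the typical rate comes to roughly $373,000 per year,
# before any scarcity premium, integration, or license fees.
print(round(annual_fiber_lease(50)))
```

At the five-times scarcity rate, the same 50km run would approach $1.9 million a year - which helps explain why the all-optical solutions remain a big-company purchase.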

These prices all assume the prospective buyer has a FC SAN in place at a site running replication and backup applications. Adding a SAN at a second backup site can add $500,000 or more to the total cost.
Prior to Sept. 11, 2001, few companies other than large financial institutions located in major metropolitan areas implemented sophisticated disaster recovery/business continuance strategies that included remote copies of data. In the aftermath, many companies are looking at how they can put some distance between their data and disasters.

In the harsh glare of 9/11, storage managers are taking a closer look at optical and IP storage networking technologies that are making distance backup of storage area networks (SANs) more affordable. Make no mistake: the technology is still expensive, but prices are starting to come down from the stratosphere.

Learning from a tragedy
One brokerage firm that had offices in the World Trade Center (and requested anonymity for this article) increased the distance between its new dual data centers by using a refined Fibre Channel (FC) technology. The technology uses a flow-control mechanism called buffer credits, which lets storage traffic travel beyond the 10km distance limitation imposed on FC. The number of buffer credits an FC switch can issue determines the distance that can be spanned. As long as enough credits keep the FC pipe filled with in-flight frames, there's no degradation in performance at distances of up to 130km.
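The relationship between credits and distance follows from simple physics. A back-of-the-envelope sketch (my own arithmetic, not the firm's figures), assuming ~5 microseconds/km propagation delay for light in fiber, the 1.0625Gb/s line rate of 1Gb/s FC, and full-size ~2KB frames:

```python
import math

FIBER_DELAY_US_PER_KM = 5.0    # light in fiber travels ~200,000 km/s
FC_RATE_BITS_PER_US = 1062.5   # 1Gb/s FC signals at 1.0625 Gbaud
FRAME_BYTES = 2148             # full-size FC frame: 2112B payload + overhead

def credits_needed(distance_km: float) -> int:
    # Each credit allows one unacknowledged frame in flight; to avoid
    # stalling, outstanding credits must cover the full round-trip time.
    round_trip_us = 2 * distance_km * FIBER_DELAY_US_PER_KM
    frame_time_us = FRAME_BYTES * 8 / FC_RATE_BITS_PER_US
    return math.ceil(round_trip_us / frame_time_us)

print(credits_needed(10))    # the classic FC distance: a handful of credits
print(credits_needed(130))   # the extended distance: roughly 80 credits
```

The limiting factor, in other words, is switch buffer memory rather than the fiber itself.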

The brokerage firm was prepared for 9/11 because it had undertaken a disaster recovery initiative in anticipation of terrorist attacks during the millennium celebration in Times Square.

But even with this preparation, the company still suffered in the 9/11 attack and, as a result, scaled up its earlier efforts.

After 9/11, however, the company had replicated backup data centers at four sites across the globe. Each data center implemented a hub-and-spoke model, with client sites connected to each of the data centers.

Prior to 9/11, the company's primary and secondary data centers in New York City were only a few blocks apart. Now, these distances have increased to 50, 60 and, in some cases, 100km. "The locations that were only a few blocks apart," says an official of the firm, "have been sold off and we've moved to geographically dispersed locations outside the city; we're trying to get distances of cities between us."

If the firm experiences another major disaster, it says its storage infrastructure could be up and running in a matter of milliseconds.

Of course, New York isn't the only city where companies are implementing aggressive disaster recovery/business continuance plans. ADVA Optical Networking's Brian McCann cites one of his company's customers in Chicago that's erecting a building to house a data center because the customer was concerned that the nearby Sears Tower might suffer the same fate as the WTC. Initially, the customer planned to locate the backup data center close to the primary data center. But after reviewing the specifics of the WTC attack, it located the backup data center 50km from both the primary site and the Sears Tower.

The technology
Optical networking technology has let customers extend SANs over distances of more than 10km. One of the first disaster recovery solutions introduced by EMC and its partner, Computer Network Technology (CNT) of Minneapolis, MN, involved shipping the storage traffic over a WAN using the SONET and asynchronous transfer mode (ATM) protocols. EMC and CNT worked out a solution that involved EMC's SRDF hardware and software, and CNT's UltraNet Director. Although this solution let users replicate storage data between SANs located in data centers hundreds of miles apart, it was expensive, costing in the neighborhood of $200,000.

SAN vendors refined this initial solution to cover MAN distances, working with optical networking partners using both WDM (wavelength-division multiplexing) and DWDM (dense wavelength-division multiplexing) to traverse distances of up to 120km.

As a result, remote data centers in other cities can participate in the SAN. Networking companies such as ADVA, Nortel Networks, and ONI Systems have forged partnerships with storage companies such as EMC, IBM, Inrange, Hitachi Data Systems, and Brocade Communications to create these optical network storage solutions.

New solutions on the horizon
Also being introduced are new chips that let companies extend their SANs. According to LightSand Communications, Milpitas, CA, its recently announced S-2500 and S-600 SAN gateway products will let organizations connect SAN fabrics over distance via SONET networks.

The foundation of these products is the company's OPX chip, a multiprotocol switching device that supports Fibre Channel (FC), Gigabit Ethernet and SONET on one processor. The OPX supports FC over SONET as well as FC over IP networks. The OPX chip connects FC and Gigabit Ethernet (including iSCSI) on the local data center side, while using SONET or IP over the WAN. With the OPX, users will be able to multiplex FC and Gigabit Ethernet onto one SONET link. OPX's technology supports FC's credit buffering scheme for 100MB/s FC links at distances up to 6,000km.

The company claims that the S-2500 does this at speeds of 2.488Gb/s over OC-48c networks, while the S-600 operates at speeds of 622Mb/s over OC-12c/STM-4c networks. According to the company, when used in conjunction with DWDM extension equipment, the S-2500 can improve bandwidth utilization by as much as 200%, providing up to three FC signals per wavelength.
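The utilization math roughly checks out. A sketch of the claim (my own arithmetic, not the vendor's), assuming the baseline is one native FC signal per DWDM wavelength and that the gateway strips FC line coding down to the 100MB/s (800Mb/s) data stream before mapping it onto SONET:

```python
MB_PER_FC_LINK = 100                              # 100MB/s FC data rate
GBPS_PER_FC_STREAM = MB_PER_FC_LINK * 8 / 1000    # = 0.8 Gb/s of payload
OC48C_GBPS = 2.488                                # OC-48c line rate

# How many decoded FC payload streams fit in one OC-48c wavelength,
# versus the single native FC signal a plain DWDM wavelength carries.
streams = int(OC48C_GBPS // GBPS_PER_FC_STREAM)
improvement_pct = (streams - 1) * 100

print(streams, improvement_pct)  # 3 signals, a 200% improvement
```

Three 800Mb/s streams total 2.4Gb/s, just under the OC-48c line rate - which is where the "up to three FC signals per wavelength" figure comes from.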

Several companies, including Nishan, Adaptec, Dell, Hitachi, IBM, QLogic, and Qwest, have tested the feasibility of transporting block storage data across the country. Called the Promontory Project - named after the town in Utah where the golden spike connected the two halves of the transcontinental railroad - the project team connected two SANs located in Sunnyvale, CA, to a third SAN in Newark, NJ, using Nishan IP storage switches connected to one OC-48 channel of an OC-192 backbone donated by Qwest. Over a two-month period, the team demonstrated speeds up to 215MB/s - bidirectional - and round-trip latency of up to 80 milliseconds.

In addition, Hitachi demonstrated, in real time, synchronous replication of data at a distance of 5,525km using its TrueCopy replication software.
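What makes synchronous replication at that distance striking is that every write must travel to the remote site and back before it is acknowledged to the application. A sketch of the floor on that latency (my own estimate, assuming the usual ~5 microseconds/km propagation delay for light in fiber and ignoring switching and protocol overhead):

```python
FIBER_DELAY_US_PER_KM = 5.0  # light in fiber travels ~200,000 km/s

def sync_write_latency_ms(distance_km: float) -> float:
    # Round trip: the write goes out, the acknowledgment comes back.
    return 2 * distance_km * FIBER_DELAY_US_PER_KM / 1000

print(sync_write_latency_ms(5525))  # ≈ 55ms added to every acknowledged write
```

A 55ms-per-write penalty is far beyond what most transactional applications tolerate, which is one reason skeptics questioned how representative the demonstration was.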

Not all industry experts accept the results, however. Nick Allen, vice president of storage research, Gartner, Stamford, CT, believes the experiment doesn't simulate real-world conditions. He points out that a considerable amount of cache was used to speed up the system, and even large companies could not justify the cost of the OC-192 circuit used in the experiment.

However, the significance of the Promontory Project was that it broadened the horizons for backup and replication, according to Randy Fardal, vice president of marketing at Nishan. "Nobody thought that you could go such great distances at high speed, but we were able to go at wirespeed [200MB/s] and do it across the country on a single GigE connection," he says.

The future
According to U.S. government officials, more terrorist attacks will occur on U.S. soil, and some may be aimed at our networking infrastructure. But, on the positive side, the technology will be available to protect the data assets of companies. Additionally, new solutions are in the pipeline that will make the job easier and cheaper.
