Five Deadly Sins of Disaster Recovery Planning
Common blunders that result in data recovery disasters.
Disaster recovery is one of those IT functions that can never be fully completed. No matter how good you think your program is, it can always be made better. And because its worth can't be proven except in extreme circumstances, it is difficult to devote time, money and resources to the cause.
Yet that's exactly what is required on a regular basis. As Forrester's Rachel Dines pointed out recently, DR is the equivalent of running a marathon: only those in tip-top condition will see the finish line. Unfortunately, very few organizations run a full DR test even once a year. At best, they test individual components to ensure continuity of various application subsets, mainly out of fear that more comprehensive trials will upset ongoing business processes.
And that's only among organizations that have DR plans to begin with. Too many enterprises still lack a means to recover from even minor disruptions, let alone the kinds of calamities that make the evening news. The cloud has proven to be a friend in need for groups that can't afford backup data facilities and advanced DR platforms, but there are still many pitfalls when it comes to maintaining continuity in the cloud. As CSO's Gregory Machler notes, many firms make the mistake of striving for broad load balancing across public and private resources so that, no matter what happens, there will always be capacity available somewhere. This is trickier than it sounds, however: applications tied to specific servers, IP addresses, DNS mappings and the like will likely crash if the underlying infrastructure suddenly shifts. A better solution is a hot-cold data center approach, in which transitions are charted out ahead of time.
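The fragility of hard-coded dependencies, and the value of charting failover ahead of time, can be illustrated with a short sketch. The hostnames and the bare TCP health check here are illustrative assumptions, not any vendor's mechanism:

```python
import socket

# Hypothetical hot (primary) and cold (standby) sites, ordered ahead of time
# as part of the DR plan rather than discovered ad hoc during an outage.
SITES = [
    ("primary.example.com", 443),   # hot data center
    ("standby.example.com", 443),   # cold data center, activated on failover
]

def pick_site(sites, timeout=2.0):
    """Return the first site that accepts a TCP connection.

    An application that resolves a service name at connect time can be
    redirected by a DNS or configuration change; one that has cached a
    raw IP address cannot, and breaks when the infrastructure shifts.
    """
    for host, port in sites:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return host, port
        except OSError:
            continue  # site down or unreachable; fall through to the next
    raise RuntimeError("no DR site reachable")
```

The point of the ordered list is exactly the "charted out ahead of time" transition: the failover target is decided in planning, not improvised mid-disaster.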
The cloud's chief advantage, of course, is scalability, which allows companies to ramp up DR resources without breaking the bank. Lately, attention has shifted toward integrating these massively scalable solutions with the often heterogeneous server, storage and networking environments that exist in many enterprises. A company called InMage, for example, has tailored its vContinuum system to provide a cohesive environment across Windows, Linux and UNIX servers coupled with DAS, NAS or SAN storage. The idea is to provide block-level asynchronous replication of VMware virtual machines with little or no disruption to production servers.
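Block-level asynchronous replication of the kind described here rests on a simple idea: hash each fixed-size block, compare against the previous cycle, and ship only the blocks that changed. A minimal sketch of that changed-block tracking (the block size and function names are illustrative, not vContinuum's internals):

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative; real products track much larger extents

def changed_blocks(data: bytes, prev_hashes: dict) -> tuple:
    """Split data into blocks and return (new_hashes, dirty_blocks).

    Only dirty_blocks need to cross the wire, which is why production
    servers see little disruption: replication runs behind the live
    workload (asynchronously) instead of in its write path.
    """
    new_hashes, dirty = {}, {}
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        new_hashes[offset] = digest
        if prev_hashes.get(offset) != digest:
            dirty[offset] = block  # changed since last cycle; replicate it
    return new_hashes, dirty
```

After each cycle, the returned hash map becomes the baseline for the next one, so steady-state traffic is proportional to change rate rather than data size.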
At the same time, in-house DR technology is becoming cheaper and easier to deploy. Veeam Software has tapped SAN developer Coraid to create an integrated data protection, disaster recovery and management solution that can be deployed at a fraction of the cost of traditional Fibre Channel systems. The system combines Coraid's massively parallel 10 GbE EtherDrive and Veeam's Backup & Replication system to enable vSphere and Hyper-V off-site replication and rapid recovery, utilizing source-side deduplication to limit file size even as overall capacity scales into the petabyte range.
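Source-side deduplication works on the same hashing principle, but the goal is to avoid sending data the target already holds. A minimal sketch, assuming a fixed chunk size and a shared hash index (neither reflects Veeam's actual on-disk format):

```python
import hashlib

CHUNK = 1024 * 1024  # 1 MiB; real systems often use variable-size chunking

def dedup_upload(payload: bytes, remote_index: set) -> list:
    """Return only the chunks the remote side does not already hold.

    Hashes travel instead of data, so a duplicate chunk costs a few
    bytes of lookup rather than a megabyte of transfer -- which is how
    backup size stays bounded even as raw capacity scales.
    """
    to_send = []
    for offset in range(0, len(payload), CHUNK):
        chunk = payload[offset:offset + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest not in remote_index:
            remote_index.add(digest)  # remote stores it after this transfer
            to_send.append(chunk)
    return to_send
```

Because the check happens at the source, redundant data never leaves the production site at all, which matters most on constrained off-site replication links.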
This probably isn't the first time, nor will it be the last, that someone tells you disaster recovery is too important to push to the back burner. But as both the cost and complexity of DR solutions continue to diminish, so too will the institutional resistance to establishing a comprehensive program. Once one is in place, however, it will be up to IT to ensure it sees regular testing and updating as data environments evolve.