For all the wonderful things that abstract, distributed data architectures can do, they still can’t operate when the lights go out or when some malfunction or another pulls the data center offline.
This is where disaster recovery (DR) comes in. As the digital economy continues to encroach upon every facet of daily life, it is up to the enterprise to ensure that when—not if, but when—systems go down, users regain access to their data within a reasonable amount of time.
Like everything else in the enterprise, however, this task has grown more complicated in the face of rapidly evolving data environments. Cloud computing, Big Data, data mobility and a rash of other initiatives are making it extremely difficult for organizations to control their daily production environments, let alone the reserve infrastructure that likely sees very little real action.
Among the few emerging trends that lend some clarity to DR development, says Enterprise Storage Forum’s Drew Robb, is a move away from the primary/recovery-site designs of the past. The goal these days is to migrate workloads quickly from site to site, so that the loss of one piece of the infrastructure is mitigated by a rapid transition to another. The downside is that in many ways this is even harder than maintaining a single recovery site, and if it is not designed and maintained properly, it can eventually become more costly as load volumes and complexity increase.
This need for broad flexibility is also why the Disaster Recovery as a Service (DRaaS) market is heating up. According to MarketsandMarkets, the DRaaS field is on pace to hit $11.92 billion by 2020, nearly a 53 percent compound annual growth rate from today’s $1.4 billion. These days, the market is characterized by the ability to replicate virtual servers over long distances in order to offset the loss of revenue that extended downtime has increasingly come to represent. DRaaS is particularly appealing to small and midsize organizations that lack the budgets and in-house expertise to maintain recovery operations on their own; many are turning to the growing number of cloud-based solutions tied to ongoing bulk storage services.
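The growth rate cited above can be sanity-checked with a little arithmetic. A minimal sketch, assuming a five-year window from today's $1.4 billion to the projected 2020 figure (the report's exact base year may differ):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate, returned as a fraction."""
    return (end_value / start_value) ** (1 / years) - 1

# $1.4B today -> $11.92B by 2020, assumed five-year window
growth = cagr(1.4, 11.92, 5)
print(f"Implied CAGR: {growth:.1%}")  # roughly 53 percent
```

The result lands close to the 53 percent figure MarketsandMarkets reports, so the projection is internally consistent under that assumption.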
Disaster recovery is also one of the primary drivers for both private and hybrid cloud architectures, despite the cost and complexity premiums these solutions carry over public resources, says hosting service provider Logicworks. By forging common architectures across internal and external resource sets, the enterprise gains a high degree of flexibility when it comes to migrating, mirroring and replicating data and data architectures across multiple geographic locations. In this way, even critical data and applications can be maintained under a unified management and security structure regardless of where the data is housed, and continuity can be preserved even in the event of a significant natural disaster that pulls large amounts of resources offline.
Some might wonder whether Big Data and the Internet of Things are about to upend this cozy arrangement before it even hits the enterprise mainstream. But according to Database Journal’s Lockwood Lyon, they will only if DR is not incorporated into the Big Data architecture from the ground up. The danger is that most organizations do not view Big Data as mission-critical, and therefore not worthy of top-notch recovery, but this leads the enterprise into a trap: Big Data will almost certainly become mission-critical at some point, and the cost of hardening it against outages once it has reached even moderate scale can be too burdensome for most organizations to bear.
Ultimately, disaster recovery is a matter of perspective. Which data is more valuable than the rest? What is the cost of data and service loss? What level of risk are you willing to accept for any given enterprise function? No one can create and maintain a fully duplicated data ecosystem that can be switched on at a moment’s notice, especially at the data volumes organizations routinely encounter.
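The questions above boil down to an expected-loss calculation: how much downtime cost does a given DR tier avoid, and is that more than the tier costs to run? A minimal sketch, in which every figure (outage probability, recovery times, revenue per hour) is an illustrative assumption rather than a benchmark:

```python
def expected_annual_loss(outage_prob_per_year: float,
                         hours_per_outage: float,
                         cost_per_hour: float) -> float:
    """Expected downtime cost per year for a single workload."""
    return outage_prob_per_year * hours_per_outage * cost_per_hour

# Hypothetical workload: 20% chance of one outage per year,
# 8 hours to recover with no DR plan, $50,000/hour in lost revenue.
loss_without_dr = expected_annual_loss(0.2, 8, 50_000)  # $80,000/year

# With a warm standby site, assume recovery takes 1 hour instead.
loss_with_dr = expected_annual_loss(0.2, 1, 50_000)     # $10,000/year

# The avoided loss is the most this workload justifies spending on DR.
dr_budget_ceiling = loss_without_dr - loss_with_dr      # $70,000/year
print(f"DR spend justified up to ~${dr_budget_ceiling:,.0f} per year")
```

Running this per workload, rather than enterprise-wide, is what turns "what risk will you accept?" from a rhetorical question into a budget line.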
In that regard, DR is not just a matter of building the proper infrastructure but undergoing a rigorous self-assessment to determine who you are as an organization and where the value of your data truly lies.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.