Data replication has long been a crucial aspect of backup and disaster recovery applications. After all, if you don’t replicate, you’re not likely to get your data back after primary systems go down.
But with the advent of software-defined architectures and dynamic, cloud-based data environments, simple replication is no longer enough for many organizations. The more dependent a business becomes on its data infrastructure, the greater the consequences of extended downtime, not to mention the permanent loss of any data created since the last backup window closed.
This is why many enterprises are turning toward continuous replication. The steady flow of data to both primary and backup storage facilities all but ensures that data environments can be restored to their pre-outage states following a failure. And if the backup is in the cloud, the benefits of continuous replication can be shared among numerous emerging data functions, such as collaboration and social media.
Leading cloud platform developers are accordingly building continuous replication into their core offerings. VMware’s Vivian Chan notes that perhaps a quarter of businesses affected by disaster fail to recover, and even firms that employ the standard 24-hour backup often find that the most recent data is the most critical, resulting in lost revenue and unmet business commitments. To that end, the company has added the vSphere Replication engine to the vCloud Hybrid Service to shorten the recovery point objective (RPO) from days to mere minutes and bring the recovery time objective (RTO) down to a matter of hours.
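The significance of that RPO shift is easiest to see as simple arithmetic. The sketch below is purely illustrative (the function name and figures are ours, not VMware's): with periodic replication, the worst-case data loss after a failure is bounded by the length of the replication interval itself.

```python
# Illustrative arithmetic, not any vendor's implementation: with periodic
# replication, everything written since the last cycle completed is at risk,
# so the worst-case data loss equals the replication interval (the RPO).

def worst_case_data_loss_hours(replication_interval_hours: float) -> float:
    """Return the worst-case hours of data lost for a given interval."""
    return replication_interval_hours

# A standard 24-hour backup window can lose a full day of data...
daily = worst_case_data_loss_hours(24)

# ...while a continuous-replication RPO of, say, 15 minutes loses far less.
continuous = worst_case_data_loss_hours(15 / 60)

print(f"Daily backup, worst case: {daily:.0f} hours of data at risk")
print(f"15-minute RPO, worst case: {continuous:.2f} hours of data at risk")
```

The point of the exercise: shrinking the replication interval from a day to minutes reduces the exposure window by roughly two orders of magnitude, which is precisely the gap between scheduled backup and continuous replication.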
EMC, too, has made continuous replication a key component of its new Data Protection Suite. The VPLEX storage virtualization system and the RecoverPoint module are designed to work together to provide what the company calls a “MetroPoint” topology, enabling continuous operation across dual data centers plus remote replication to a third. With data protection extended across all three sites, EMC says it can maintain information integrity and data operations even if two of the sites go down. Even midsize organizations will be able to take advantage of the VPLEX platform, as the company has also released it as a virtual appliance.
Smaller developers are looking to leverage the demand for continuous replication, too. CodeFutures, for example, is touting a patent for its proprietary CR technology, designed to ensure that database functionality and other key processing operations can be maintained even if main systems are offline. The dbShards/Replicate component of the broader dbShards platform provides improved reliability between DBMS instances spread across multiple data centers or clouds, and can also be used by admins to maintain continuous service while one or more databases undergo routine maintenance. The system runs on commodity hardware and can be used with multi-user OLTP databases, online services and SaaS applications.
Meanwhile, an Israeli startup called CloudEndure has released a beta version of its real-time replication system aimed at cloud-based applications. With the enterprise relying on third-party infrastructure to an increasing degree, recovery solutions need to focus not just on the loss of owned-and-operated infrastructure, but third-party resources as well. The CloudEndure platform claims to eliminate downtime by maintaining an entire application environment within multiple clouds. If one cloud should fail, CloudEndure promises a single-click restoration to bring the alternate environment online.
Continuous replication is a more complicated and costly endeavor than today’s scheduled processes, placing substantial demands on network and other resources, even during peak data loads. Leading platforms promise to carry out the function with as little disruption to normal operations as possible, but much of that will depend on the underlying infrastructure rather than the replication platform itself.
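To make that resource demand concrete, here is a back-of-the-envelope sizing sketch. All figures and the overhead factor are assumptions for illustration, not numbers from any vendor: a continuous replication link must sustain the workload's peak data change rate, with headroom for protocol overhead and bursts.

```python
# Hypothetical sizing sketch: a continuous-replication link must keep pace
# with the peak rate of changed data, plus overhead. The 1.3 overhead
# factor below is an assumption, not a vendor specification.

def required_link_mbps(peak_change_mb_per_s: float,
                       overhead_factor: float = 1.3) -> float:
    """Convert a peak data change rate (MB/s) into the sustained link
    speed (Mbps) needed, padded by an assumed overhead factor."""
    return peak_change_mb_per_s * 8 * overhead_factor  # 8 bits per byte

# Example: a workload whose changed-data rate peaks at 25 MB/s.
print(f"Sustained link required: {required_link_mbps(25):.0f} Mbps")
```

Run against a 25 MB/s peak, the sketch calls for roughly 260 Mbps of sustained bandwidth, which illustrates why the article's caveat matters: the replication software can only perform as well as the network beneath it.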
As for its true value to the enterprise, that judgment falls to the CIO, who should carefully weigh the value of organizational data against the consequences should it be lost or unusable for an extended period of time.