Data is the lifeblood of the modern enterprise, and as with most complex organisms, loss of blood can lead to weakness and death.
So it is no wonder that data recovery has emerged as a top priority as the enterprise finds itself entrusting the care and maintenance of its lifeblood to third-party providers to an ever greater degree.
According to Veeam Software, application and data downtime costs the average enterprise about $2 million per year, with the vast majority of that cost attributed to the failure to recover data in a reasonable amount of time. This puts IT in a bind, though, as the pressure to improve recovery times is often accompanied by the front office's reluctance to invest in adequate backup and recovery (B&R) infrastructure. The same underinvestment raises the risk of permanent data loss, as many organizations maintain backup windows and restore points that fail to account for how much potentially critical data can accumulate in a relatively short time.
The cloud has done a lot to relieve the burden, financial and otherwise, of wide-scale B&R. In fact, according to ResearchandMarkets, this is one of the primary drivers of IaaS, in that it provides a ready platform not only to integrate backed-up data into dynamic production environments, but also to maintain a duplicate IT infrastructure should primary resources go dark. IaaS also puts these capabilities within reach of the small-to-midsize enterprise.
Even in the cloud, however, it can take time to locate and retrieve critical data, which can still do significant harm to the business in a digital age where every second counts. This is why a recent patent issued to Nasuni Corp. is so intriguing. According to the company, the patent covers a technique that allows terabytes of data to be restored in a matter of minutes. The function is part of the Nasuni Service, which takes full data snapshots every minute and then tracks files, folders and other data sets through sophisticated metadata, allowing a copy to be pulled from its source with a single mouse click. The system can also retrieve different versions of the same file as the data within it changes over time.
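Nasuni has not published the internals of the patented method, but the general pattern it describes, frequent snapshots indexed by metadata so that a restore is a lookup rather than a scan of the data itself, can be sketched in a few lines. The class and field names below are hypothetical illustrations, not Nasuni's:

```python
from bisect import bisect_right
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Version:
    snapshot_time: float   # epoch seconds of the snapshot that captured this version
    object_ref: str        # pointer to the stored copy (e.g., an object-store key)

class VersionIndex:
    """Metadata index mapping each file path to its ordered version history."""
    def __init__(self):
        self._history = defaultdict(list)   # path -> [Version, ...] in time order

    def record(self, path: str, snapshot_time: float, object_ref: str):
        # Called once per changed file each time a snapshot completes.
        self._history[path].append(Version(snapshot_time, object_ref))

    def version_as_of(self, path: str, when: float):
        # Binary-search the metadata only; no data is read until restore time.
        versions = self._history.get(path, [])
        times = [v.snapshot_time for v in versions]
        i = bisect_right(times, when)
        return versions[i - 1] if i else None

# Usage: restoring a file to its state at a given moment is a metadata lookup
# followed by a single fetch of the referenced object, so restore time does not
# grow with the total amount of data under protection.
idx = VersionIndex()
idx.record("/finance/ledger.db", 1000.0, "obj-0001")
idx.record("/finance/ledger.db", 1060.0, "obj-0002")
print(idx.version_as_of("/finance/ledger.db", 1030.0))  # -> the obj-0001 version
```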
Technology is certainly a key asset in the data recovery process, but so are policy and data management, says Networks Unlimited’s Anton Jacobsz. Loading B&R resources with every last bit of data not only increases costs, it can make it difficult to search for and recover the information you really need, so it helps to have a clear idea of what to keep and what to toss. Organizations with numerous branch offices should also work out a plan to consolidate their data, remove duplicates and then integrate recovery operations across all endpoints. This helps maintain a smooth flow of information and supports branch-to-branch recovery as well. Meanwhile, applications should be structured to run smoothly from secondary data centers or the cloud.
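Jacobsz's advice concerns policy rather than any particular product, but the consolidation step he describes often amounts to content-addressed deduplication across branch backups. Here is a minimal sketch under that assumption; the directory names and the consolidate helper are made up for illustration:

```python
import hashlib
from pathlib import Path

def consolidate(branch_dirs, dedup_store: Path):
    """Copy each unique file (identified by content hash) into one shared store.

    Files duplicated across branch offices are stored only once; a manifest
    records which branch paths map to which stored object, so any branch can
    later be restored from the consolidated copy.
    """
    manifest = {}   # (branch, relative path) -> content hash
    dedup_store.mkdir(parents=True, exist_ok=True)

    for branch in branch_dirs:
        for f in Path(branch).rglob("*"):
            if not f.is_file():
                continue
            data = f.read_bytes()
            digest = hashlib.sha256(data).hexdigest()
            target = dedup_store / digest
            if not target.exists():          # store unique content only once
                target.write_bytes(data)
            manifest[(branch, str(f.relative_to(branch)))] = digest
    return manifest

# Hypothetical usage: three branch backup directories feed one store.
# manifest = consolidate(["backups/boston", "backups/austin", "backups/leeds"],
#                        Path("consolidated_store"))
```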
Major IT outages – whether natural, man-made or technology-related – are coming at such a regular clip these days that few people need to be convinced of the need for robust B&R. But wanting something and having the ability to pay for it are two different things.
In the end, it comes down to a simple calculation: Does the actual cost of increasing the recovery budget outweigh the potential cost of losing critical data and services for an extended period of time? That is a question every enterprise has to answer for itself.
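That calculation can be roughed out on the back of an envelope. The sketch below uses the $2 million annual downtime figure cited earlier; the other numbers are purely hypothetical placeholders that an enterprise would replace with its own:

```python
# Back-of-the-envelope cost/benefit check for a B&R budget increase.
annual_downtime_cost = 2_000_000   # average yearly downtime cost (Veeam figure cited above)
expected_reduction   = 0.60        # assumed fraction of that cost a better B&R setup would avoid
added_br_budget      = 500_000     # assumed annual cost of the upgraded B&R infrastructure

expected_savings = annual_downtime_cost * expected_reduction
net_benefit = expected_savings - added_br_budget

print(f"Expected avoided loss:  ${expected_savings:,.0f}")
print(f"Net benefit of upgrade: ${net_benefit:,.0f}")
# If net_benefit is positive under the enterprise's own numbers, the budget
# increase pays for itself; if not, the leaner recovery posture may be rational.
```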
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.