In the entire realm of IT, the one task that gets the least amount of attention is backup and recovery. But when there is a crisis, the one thing that everybody in the organization suddenly focuses on is backup and recovery.
Unfortunately, backup and recovery is being neglected at exactly the wrong moment. Fundamental changes at work in enterprise IT are putting enormous pressure on the backup and recovery process.
The first of those forces is the sheer amount of data that needs to be managed. In most organizations not only is the amount of data doubling and sometimes tripling each year, the types of data are multiplying as well. Beyond the structured data typically associated with a database, any number of regulations now require the backup and recovery of unstructured data. And just to make matters more interesting, multimedia files that have completely different sets of storage requirements and attributes are being thrown into the mix.
The second scourge of backup and recovery is virtualization. With each physical server now supporting anywhere from four to 10 virtual servers, that's anywhere from three to nine additional servers that need to get backed up.
Alas, there are only 24 hours in a day. When you combine the growth in data and in virtual servers with the fact that IT organizations can't simply take servers offline whenever they please in order to back them up, you start to see that it's only a matter of time before something very bad happens.
Wm. Wrigley Jr. Co., a unit of Mars Inc., for example, discovered most of these issues the hard way. After a couple of crashes in which data could not be recovered, the company turned to Hewlett-Packard's Data Protector software running on a disk-based backup system, which now allows it to back up mission-critical SAP applications every 15 minutes.
Most organizations probably don't need to be as extreme about backup as Wrigley. But there are a few ounces of prevention that most IT organizations should be applying today to head off a recovery crisis tomorrow.
The first thing on the agenda should be data deduplication software. There are a ton of options available and lots of debate over how best to do this. If you're performance sensitive, make sure you get "post-processing" data deduplication software, which runs after the backup completes so the deduplication process doesn't slow down your systems. If you're not performance sensitive, it really doesn't matter how you do data deduplication. What matters is that you reduce the amount of data, especially redundant Microsoft PowerPoint presentations, that needs to be backed up. You can also opt to get this technology bundled with your backup and recovery hardware, or opt for a software-only approach such as CommVault's that lets you deploy it on almost any system you like. No matter how you do it, not only can you reduce the sheer amount of data that needs to be managed, this type of software goes a long way toward cutting the total cost of managing data, by anywhere from 40 percent to 50 percent. In effect, it pays for itself.
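At its core, deduplication works by fingerprinting chunks of data and storing each unique chunk only once; every later copy of the same chunk becomes a small reference. Here's a minimal Python sketch of that idea, using fixed-size chunks and SHA-256 hashes. Real products typically use variable-size, content-defined chunking, but the space-saving principle is the same.

```python
import hashlib

CHUNK_SIZE = 4096  # bytes per chunk (illustrative)

def deduplicate(data: bytes, store: dict) -> list:
    """Store unique chunks of `data` in `store`; return the ordered
    list of chunk hashes (a "recipe") needed to reassemble it later."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:   # new chunk: keep exactly one copy
            store[digest] = chunk
        recipe.append(digest)     # duplicates cost only a reference
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its recipe."""
    return b"".join(store[digest] for digest in recipe)
```

Two near-identical files (think of those redundant PowerPoint decks) end up sharing most of their chunks in the store, which is where the 40-to-50-percent savings come from.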
The second thing that needs to get done is to start segregating your data. Not all data is of equal value, and disk-based backup systems are more expensive than tape systems. You really only need disk-based systems for critical data and the most recent data. The simple fact is that 90 percent of the time, end users are looking to recover a file that is 24 to 48 hours old. Except in a few cases, there's no real need to keep files that are six months old on disk.
The third thing to remember is to make sure that you have a backup and recovery solution in place that is virtual machine friendly. You don't need backup agent software for every instance of a virtual machine on a server. The latest generation of backup and recovery products can support multiple virtual machines on a single physical server. Don't let backup vendors tell you that you need to pay additional licensing fees for each virtual server. If they do, it's less expensive to get a new backup and recovery system.
Fourth and finally, the laws of physics are immutable. The larger files grow, the longer it will take to recover them using existing network and storage I/O technology. In most organizations, the speed of the recovery matters more than how long the backup took, so it's a good idea to make sure that the bandwidth allocated to backup and recovery is sufficient to meet those requirements.
If there was a patron saint for backup and recovery, it would probably be Rodney Dangerfield, given the amount of respect the task gets (although the Roman Catholic Church has apparently assigned responsibility for finding lost things to Saint Anthony).
Regardless of whom you might seek spiritual assistance from in a crisis, there are no atheists when it comes time to recover a file. God helps those who help themselves. And those who re-engineer their backup and recovery processes today are going to think themselves most blessed when the next inevitable crisis hits.