You can download helpful guidelines on when and how to schedule your backups in our Knowledge Network.
Every aspect of the data center environment can stand a little improvement. But if your backup capabilities are like most, they are in dire need of an upgrade. These 10 tips can help you get started.
Our Arthur Cole, who compiled this list, covers data backup issues in his blog here at IT Business Edge. If you are looking for expert advice on how to improve your data backup strategy, you'll want to check out these posts from Art and our other IT Business Edge experts.
In Pursuit of Faster Backup
The Changing Role of Tape in the Enterprise
MidMarket IT Getting Pulled in Multiple Directions
From CTO Edge: In the Event of a Data Loss
Click through for 10 great tactics for improving your data backup strategies.
Remove all redundant data and send only one copy to long-term storage. It sounds easy, but it's not. Adequate mapping is crucial so applications can find the data again if necessary. Source-based dedupe is more expensive than target-based, but it provides faster transfer speeds by reducing network data loads.
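To make the idea concrete, here is a minimal Python sketch of target-style dedupe using content hashing. The chunking, names and in-memory store are illustrative stand-ins for a real appliance's index and disk pool.

    import hashlib

    # Minimal content-addressed store: identical chunks are kept only once.
    store = {}   # hash -> chunk bytes (the single stored copy)
    index = []   # ordered list of hashes: the mapping used to rebuild the stream

    def dedupe_chunk(chunk):
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:   # only previously unseen content hits storage
            store[digest] = chunk
        index.append(digest)      # record where the data lives so it can be found again

    def restore():
        return b"".join(store[h] for h in index)

    # Three chunks, two identical: only two copies are actually stored.
    for c in (b"alpha", b"beta", b"alpha"):
        dedupe_chunk(c)
    assert restore() == b"alphabetaalpha"
    assert len(store) == 2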
Reformat existing data with more efficient encoding. The newest systems can cut data loads nearly in half without any appreciable performance degradation. Bonus points go to systems that provide real-time compression without interfering with other processes like cloning, replication and dedupe.
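The effect is easy to demonstrate with Python's standard zlib module. The sample data below is invented for illustration; real systems apply compression inline on the storage path.

    import zlib

    # Repetitive data (logs, config dumps) often shrinks by half or more.
    original = b"timestamp=2011-01-01 status=OK host=web01\n" * 1000
    compressed = zlib.compress(original)

    print("original:  ", len(original), "bytes")
    print("compressed:", len(compressed), "bytes")

    # Decompression restores the data bit for bit, so there is no degradation.
    assert zlib.decompress(compressed) == original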
Also known as multi-node backup, this architecture replaces the central backup server with multiple storage nodes. Not only does it provide greater scalability, but it also improves redundancy and resource utilization.
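As a rough sketch of how multi-node placement spreads load and adds redundancy, consider the hypothetical Python below; the node names and replica count are invented for the example.

    import hashlib

    NODES = ["node-a", "node-b", "node-c"]   # hypothetical storage nodes
    REPLICAS = 2                             # each chunk lands on two nodes

    def place(chunk_id):
        # Pick REPLICAS nodes for a chunk; hashing spreads load evenly.
        start = int(hashlib.md5(chunk_id.encode()).hexdigest(), 16) % len(NODES)
        return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

    for cid in ("db-chunk-001", "db-chunk-002", "db-chunk-003"):
        print(cid, "->", place(cid))

Losing one node still leaves a copy of every chunk elsewhere, and adding nodes raises both capacity and aggregate throughput.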
Most virtual architectures let you retain your existing backup systems but apply them not just to data but to entire operating instances, including system configuration, applications and related toolsets. These can be restored much more quickly and with less disruption than in traditional environments because the reprovisioning process is accelerated.
Tape is still the cheapest storage medium, but it is painfully slow. Since many legacy backup systems are optimized for tape, the VTL is an easy way to speed things up. It's a disk array that presents itself to the network as a tape library. Because disk offers far faster access than tape, both backup and restore rates improve dramatically.
As with the cloud paradigm generally, you get virtually unlimited scalability but pay only for what you use. Data is stored offsite, so hardware and software issues are someone else's problem, and yet everything is online, so recovery can be quick, provided your wide area throughput is adequate.
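With Amazon S3 and the boto3 SDK, for instance, a backup upload and restore can be as short as the sketch below; the bucket and file names are hypothetical, and other providers expose similar object-store APIs.

    import boto3  # AWS SDK for Python; assumes credentials are already configured

    s3 = boto3.client("s3")

    # Hypothetical bucket and paths -- substitute your own.
    s3.upload_file("/backups/nightly-2011-01-01.tar.gz",
                   "example-backup-bucket",
                   "nightly/2011-01-01.tar.gz")

    # Restores are just downloads, so recovery speed is bounded mainly by
    # your wide area throughput, as noted above.
    s3.download_file("example-backup-bucket",
                     "nightly/2011-01-01.tar.gz",
                     "/restore/nightly-2011-01-01.tar.gz")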
This is actually a trade-off. A differential backup captures everything that has changed since the last full backup, so each run takes longer and requires more capacity as changes accumulate, but recovery needs only the last full backup plus the most recent differential. An incremental backup captures only the files that have changed since the last backup of any kind, so it runs quickly but has a longer recovery time, since a restore must apply the full backup plus every incremental taken since.
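The distinction is simple to express in code. This Python sketch, with hypothetical helper names, selects the file set for each style from modification times.

    import os

    def changed_since(paths, cutoff):
        # Return the files modified after `cutoff` (a Unix timestamp).
        return [p for p in paths if os.path.getmtime(p) > cutoff]

    def differential_set(paths, last_full_time):
        # Everything changed since the last FULL backup; grows until the next
        # full, but a restore needs only the full plus the latest differential.
        return changed_since(paths, last_full_time)

    def incremental_set(paths, last_backup_time):
        # Only what changed since the LAST backup of any kind; small and fast,
        # but a restore must replay the full plus every incremental since.
        return changed_since(paths, last_backup_time)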
The latest data management systems contain a wide variety of monitoring tools and metrics designed to assign various data types to the correct tier. A central concept is Hierarchical Storage Management (HSM), which looks at the age, type and access frequency of data to determine whether it goes to near-line, mid-tier or long-term storage.
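A toy version of an HSM placement rule might look like the Python below; the thresholds and tier names are invented for illustration, since real policies weigh data type, cost and more.

    # Hypothetical policy: route data by age and access frequency.
    def choose_tier(age_days, accesses_per_month):
        if accesses_per_month > 30:
            return "near-line"    # hot data on fast disk
        if age_days < 90 or accesses_per_month > 1:
            return "mid-tier"     # warm data on capacity disk
        return "long-term"        # cold data on tape or archive storage

    print(choose_tier(age_days=5, accesses_per_month=120))    # near-line
    print(choose_tier(age_days=45, accesses_per_month=4))     # mid-tier
    print(choose_tier(age_days=400, accesses_per_month=0.1))  # long-term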
Faster throughput is a boon to just about every enterprise function, and backup is no exception. High-bandwidth links (10 GbE at least) not only speed up the backup process, freeing techs to pursue other tasks, but also greatly improve recovery point objectives and other restoration targets. If you've ever gone down unexpectedly, there's no such thing as getting back online too quickly.
Warehouse pros bristle at being lumped together with backup, but the fact is warehousing can vastly improve the value of simple backup by turning all that data into usable information. Advanced analytics, data consolidation and integrated reporting can provide insight into organizational structures and activities that can be used to streamline operations, enhance productivity or identify new revenue streams.