The term data deduplication increasingly refers to a data-reduction technique that breaks streams of data into very granular components, such as blocks or bytes, stores only the first instance of each item on the destination media, and adds all other occurrences to an index. Because it works at a more granular level than single-instance storage, the resulting space savings are much higher, delivering more cost-effective solutions. Those savings translate directly into reduced acquisition, operation, and management costs.
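The first-instance-plus-index scheme described above can be sketched in a few lines of Python. This is a minimal illustration using fixed-size blocks and SHA-256 hashes; the function names and block size are assumptions for the example, and real deduplication products typically use variable-size chunking and persistent indexes.

```python
import hashlib

def deduplicate(stream: bytes, block_size: int = 4096):
    """Split a byte stream into fixed-size blocks, store each unique
    block once, and record every occurrence in an index."""
    store = {}   # block hash -> block bytes (unique blocks only)
    index = []   # one hash per occurrence, in stream order
    for offset in range(0, len(stream), block_size):
        block = stream[offset:offset + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block      # first instance: keep the data
        index.append(digest)           # every occurrence goes to the index
    return store, index

def rehydrate(store, index):
    """Reassemble the original stream from the stored blocks and index."""
    return b"".join(store[h] for h in index)
```

For a stream of two identical 4 KB blocks, `store` holds one block while `index` holds two entries, so only half the data is written; the savings grow with the number of duplicate blocks.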
Data deduplication technologies are deployed in many forms and in many places within the backup and recovery infrastructure. The technology has evolved from being delivered in specially designed disk appliances offering post-process deduplication to being a distributed capability integrated into backup and recovery software. According to CA Technologies, along the way solution suppliers have identified the strengths and weaknesses of each stage of that evolution and developed the high-performance, efficient technologies available today.
This slideshow examines data deduplication and five areas, identified by CA Technologies, that you should consider carefully when approaching a data deduplication project.