IT organizations today are coping with the most difficult set of data storage management challenges they have ever faced.
On the one hand, the amount of data that needs to be managed is growing exponentially. On the other, management wants to cut the IT budget allocated to storage, not just in terms of cost per terabyte but in the total amount of money spent on storage.
Having seen the massive savings that virtualization delivered through server consolidation, senior IT executives want the same effect on their storage, which can consume anywhere from 20 to 50 percent of the IT hardware budget. But while IT managers would like to get there by increasing the utilization rate of each storage array, storage virtualization cannot be allowed to compromise the I/O performance of any given application.
Peter Fuller, vice president of business development for Scale Computing, a provider of storage clustering software, says these issues are driving many IT organizations to completely rethink their approach to storage management. In addition to aggressively tiering data to make sure that they get the most out of expensive storage systems, they are also pursuing approaches that provide the highest level of flexibility.
That means, among other things cited by Fuller, choosing storage architectures that don't lock customers into a particular storage protocol, being able to add capacity in granular 1TB increments and, most importantly, reducing the amount of time it takes to manage large amounts of storage.
This level of flexibility is essential, said Fuller, because with storage pricing dropping rapidly, customers can't afford to be locked into long-term storage contracts based on a fixed price.
Ultimately, Fuller says the next generation of storage management will be defined by cost, control and convenience, three criteria that most existing storage architectures fail to meet.
Click through for advice, courtesy of Scale Computing.
There’s a lot more to storage costs than just the cost per terabyte.
Unified storage solutions should not be hampered by protocol lock-in.
On-demand scalability will be required to keep pace with data growth.
IT organizations can no longer afford to throw people at the storage problem.
Data is growing at a rate that IT organizations can't keep up with.
As storage pricing becomes more dynamic, no one wants to be locked into long-term contracts.
Many IT organizations are trying to eliminate dedicated file servers.
IT organizations shouldn't have to pay for redundant hardware that sits idle most of the time.
IT organizations shouldn't be required to over-provision storage.