Private Clouds Depend on Dynamic Storage

Michael Vizard

There's a lot of talk these days about IT organizations achieving higher levels of maturity by delivering IT as a service via the adoption of private cloud computing architectures.

But before any of that really can happen, some fundamental advances need to be made in terms of how we manage and store data. Although we've been talking about concepts such as hierarchical storage management (HSM) for a decade or more, IT organizations in 2010 are going to need to start moving toward a much more dynamic implementation of HSM in order to enable clouds of computing resources.

Unfortunately, HSM has been more theory than practice in most IT organizations. In the name of performance, the standard practice has been to pretty much give each application access to its own dedicated storage array. But as the amount of power all these storage devices consume has grown, along with the cost of supplying it, IT organizations are looking for more dynamic approaches to HSM that will not only cut costs, but also provide a more efficient way to manage data.

A big part of the answer to that equation is the incorporation of solid-state disk drives (SSD) into storage arrays. In 2010, the cost of these devices will drop far enough to make it affordable to put entire databases and applications in memory. For a lot of IT organizations, this will meet the performance requirements of their most mission-critical applications. The rest of the data can then be distributed across lower storage tiers using a more dynamic approach to HSM, based on the quality of service each application needs. And just to make things more interesting, many of those applications will be moving dynamically around the enterprise thanks to recent advances in virtualization.
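To make that tiering idea a little more concrete, here is a minimal sketch, in Python, of the kind of policy a dynamic HSM layer might apply when deciding where a given dataset should live. The tier names, QoS classes and thresholds below are purely illustrative assumptions, not any vendor's actual interface.

```python
# Hypothetical sketch of a dynamic tiering policy; tier names,
# QoS classes and thresholds are illustrative assumptions only.
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    SSD = "ssd"          # flash tier for mission-critical data
    FC_SAS = "fc_sas"    # high-performance spinning disk
    SATA = "sata"        # capacity / archive tier


@dataclass
class DatasetStats:
    name: str
    qos_class: str       # e.g. "mission_critical", "business", "archive"
    reads_per_hour: int  # observed access rate


def choose_tier(stats: DatasetStats) -> Tier:
    """Pick a storage tier from the QoS class first, then observed activity."""
    if stats.qos_class == "mission_critical":
        return Tier.SSD
    if stats.qos_class == "archive" or stats.reads_per_hour < 10:
        return Tier.SATA
    # Everything else lands on the mid-performance tier and can be
    # promoted later if its access rate climbs.
    return Tier.FC_SAS


if __name__ == "__main__":
    for ds in [
        DatasetStats("erp_db", "mission_critical", 5000),
        DatasetStats("web_logs", "business", 4),
        DatasetStats("email_archive", "archive", 1),
    ]:
        print(ds.name, "->", choose_tier(ds).value)
```

The point of the sketch is simply that tier placement becomes a runtime decision driven by service levels and observed workload, rather than a one-time assignment of an application to its own dedicated array.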

According to Pillar Data Systems CEO Mike Workman, the ability to provide this level of service on the fly is going to ultimately separate the men from the boys in the world of storage. A lot of the storage architectures in place today, he notes, were never designed to meet the demand for dynamic HSM that will be the hallmark of private cloud computing.

With the rise of virtualization and cloud computing in the enterprise, 2010 is going to prove to be one of the most challenging times for IT in recent memory. And most of those challenges are going to start and finish with what happens to the way we design, implement and manage the next generation of storage architectures.
