Storage Automation in an Increasingly Complex Environment

Arthur Cole

You won't find many IT techs who complain that their organization has deployed too much virtualization. But the fact remains that increased VM density in the server farm results in increased pressure on resources elsewhere, particularly storage.

And while the cloud has made the provisioning of storage both easier and cheaper, it nonetheless contributes to increasingly large capacities that must be brought under management in order to provide effective support for the new dynamic data environment.

That's where automation comes in. Even at this early stage of cloud development, storage resources are already becoming too complex to be managed by mere humans. With data environments and user requirements in a constant state of flux, only automated processes will be able to match resource availability with demand, providing an effective barrier against both over-provisioning and disruption of service.

According to iWave CEO Brent Rhymes, data under management is expected to grow 50-fold by the end of the decade, while IT staffing levels are expected to grow only one-and-a-half-fold. And with self-service provisioning of virtual environments, a fully automated stack is the only way to keep storage managers from toiling away at mundane, repetitive tasks all day. It also helps that storage automation cuts the provisioning process from weeks to mere hours, configuring not only storage, but fabric, host and application pathways as well. At the same time, a properly designed system will constantly monitor storage environments and implement a wide range of self-correcting measures to keep resources and load in optimal balance.
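The self-correcting coordination between resources and load described above can be sketched in a few lines. This is an illustrative model only, not any vendor's API: the function names, the 20 percent headroom figure and the capacity cap are all invented for the example.

```python
# Illustrative sketch of demand-driven provisioning: size each volume to
# observed demand plus a headroom buffer, so the system neither
# over-provisions nor runs out of space when demand spikes.

def provision(demand_gb, headroom=0.2, max_capacity_gb=10_000):
    """Return an allocation sized to demand plus a safety buffer,
    capped at the pool's physical capacity."""
    requested = demand_gb * (1 + headroom)
    return min(requested, max_capacity_gb)

def rebalance(allocations, demands, headroom=0.2):
    """Self-correcting pass: shrink over-provisioned volumes and grow
    under-provisioned ones so allocations track current demand."""
    return {vol: provision(demands[vol], headroom) for vol in allocations}

# A monitoring loop would re-run rebalance() as demand shifts.
current = {"vm-01": 120.0, "vm-02": 500.0}   # current allocations (GB)
observed = {"vm-01": 200.0, "vm-02": 300.0}  # observed demand (GB)
current = rebalance(current, observed)
```

In a real stack the same rebalancing pass would also touch fabric zoning and host paths, but the core idea is this feedback loop between observed demand and allocated capacity.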

Automation and the cloud are emerging as two key weapons in the fight against storage complexity, says Storage Switzerland's George Crump. A key component, however, is the ability to differentiate between various data types — down to the sub-file level, at least — to determine what level of storage is appropriate. SharePoint itself, for example, requires the high-speed performance of a solid-state tier, while many of its documents might do perfectly well on low-cost secondary storage or in the cloud. With proper management, in fact, storage could very well move from the cost side of the balance sheet to the asset side.
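That kind of tiering decision boils down to a policy function. The tier names, thresholds and object types below are hypothetical, invented for illustration; a production system would classify at the sub-file level against real access telemetry.

```python
# Hypothetical tiering policy: route data to a storage tier based on
# object type and access frequency. Hot data lands on solid state,
# warm data on low-cost disk, cold data in cloud archive.

def choose_tier(obj_type, accesses_per_day):
    """Pick a storage tier for an object (illustrative thresholds)."""
    if obj_type == "database_index" or accesses_per_day > 100:
        return "ssd"            # hot: needs solid-state performance
    if accesses_per_day > 1:
        return "secondary"      # warm: low-cost secondary storage
    return "cloud_archive"      # cold: cheapest capacity wins

# A rarely touched document drops to the cheapest tier.
tier = choose_tier("document", 0.1)
```

The same policy, run continuously rather than once, is what lets automation demote stale SharePoint documents to cheap storage while keeping the application's own working set on fast media.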

Also crucial is the simplicity of the management software itself. As data environments grow more complex, the user interface must provide a high degree of orchestration across multiple platforms and data architectures without forcing managers to navigate multiple layers of software. Unfortunately, according to Computerworld's Robert L. Scheier, no one has yet developed a "single pane of glass" system that can span multivendor environments in the data center and the cloud. At best, we have a range of service catalogs directed at various applications and APIs — call them "aggregation platforms" — that interface with management systems from CA or Amazon.

In the meantime, it looks like the steady stream of vendor-centric automation will continue unabated. The latest entrant is Dell's new EqualLogic storage blades, which feature the company's Array SAN management system and a set of Host Integration Tools for automating Microsoft, VMware and Linux environments across distributed architectures. These, along with a simplified power and networking architecture, should cut the provisioning time for a single blade down to about 20 minutes.

The basic conundrum with automation is that to make it easier on the user, the development and coding has to be highly complex. That means that while we can all agree on the kind of all-encompassing, easy-as-pie automation system that is needed for advanced virtual and cloud environments, no one has figured out how to do it just yet.

It would be nice to close with some reassurance that we'll get there someday, but with data architectures themselves advancing at such a rapid pace, it's all the management stack can do just to keep up.

Perhaps we'll arrive at a certain state of equilibrium once the transition to the cloud is complete — provided, of course, that an even more disruptive technology isn't waiting in the wings.
