Embracing the future is usually more of a process than an event. Once the initial FUD (fear, uncertainty, doubt) passes, there is often a complete 180 in which all the problems of today are expected to be swept away. Once actual development and deployment kick in, however, the real-world practicalities become evident, leading to the realization that new issues invariably arise to take the place of the old.
You can see this dynamic playing out across a variety of data center initiatives these days. From software-defined infrastructure to cloud computing and even plain-old virtualization, the bloom eventually comes off the rose, albeit usually after it is too late to turn back.
ClearSky Data’s Laz Vekiarides recently turned the microscope on software-defined storage (SDS) and found a number of things to be wary of, although certainly nothing that outweighs the benefits. For one thing, there is no standard definition of SDS, which gives vendors free rein to slap the label on all manner of solutions without necessarily providing all the functionality that users expect. As well, SDS is often price competitive with legacy storage infrastructure only when deployed on commodity white-box hardware, and even then only when purchased in quantities that exceed the needs of most enterprises.
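To see why the economics only close at scale, consider a toy cost model (every figure here is an illustrative assumption, not real vendor pricing): the fixed overhead of SDS licensing, integration and support has to be amortized over enough cheap commodity capacity to undercut the legacy price per terabyte.

```python
# Toy cost model of SDS on white-box hardware vs. legacy storage.
# All figures are illustrative assumptions, not real vendor pricing.

legacy_cost_per_tb = 500        # assumed legacy array price, USD per TB
whitebox_cost_per_tb = 150      # assumed commodity white-box price, USD per TB
sds_fixed_overhead = 200_000    # assumed SDS licensing/integration/support, USD

for capacity_tb in (100, 300, 600, 1000):
    sds_total = whitebox_cost_per_tb * capacity_tb + sds_fixed_overhead
    legacy_total = legacy_cost_per_tb * capacity_tb
    winner = "SDS" if sds_total < legacy_total else "legacy"
    print(f"{capacity_tb:>5} TB: SDS ${sds_total:,} vs legacy ${legacy_total:,} -> {winner} wins")
```

Under these assumptions the crossover sits near 570 TB, well beyond what most enterprises deploy, which is exactly the dynamic Vekiarides describes.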
The cloud, of course, has long been suspected of poor reliability, dicey security and indifferent customer service. Whether this reputation is earned or not, the fact remains that cloud data centers are just as susceptible to the issues that plague local infrastructure, says the Register’s Dave Cartwright, which means that even after all that careful provisioning and data migration, operationally not much has changed. To be sure, most cloud providers are keen on delivering the best customer service they can, but when the system goes down, it’s your profitability on the line, and the service credits you get on the back end rarely equal the value of the business that was lost.
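A quick back-of-the-envelope calculation shows how lopsided that trade can be. All of the figures below are assumptions for the sake of the example, not any provider’s actual SLA terms:

```python
# Illustrative comparison of an SLA service credit vs. revenue lost
# during a cloud outage. All numbers are assumptions for this example,
# not any provider's actual terms.

monthly_cloud_bill = 20_000   # assumed monthly spend with the provider, USD
credit_rate = 0.10            # assumed 10% service credit for a missed SLA
revenue_per_hour = 15_000     # assumed revenue riding on the system, USD/hour
outage_hours = 6              # assumed length of the outage

service_credit = monthly_cloud_bill * credit_rate
lost_revenue = revenue_per_hour * outage_hours

print(f"Service credit received: ${service_credit:,.0f}")   # $2,000
print(f"Revenue lost to outage:  ${lost_revenue:,.0f}")     # $90,000
print(f"Uncovered loss:          ${lost_revenue - service_credit:,.0f}")
```

A $2,000 credit against $90,000 in lost business is the kind of gap that no amount of careful provisioning makes up for.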
This is why many organizations are looking to duplicate cloud functionality on their own converged and hyperconverged infrastructure, but even here the drawbacks are significant, says ITWorld’s Danny Bradbury. For one thing, the technology is so new and the market so fragmented that it is difficult to implement a cohesive environment. And when it comes time to scale, all modules must scale equally across compute, storage and networking, even if the capacity crunch is on only one of those resources. Sure, you could add a dedicated storage or compute component, but that goes against the grain of implementing an integrated system in the first place.
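The scaling penalty is easy to quantify. Here is a minimal sketch, with assumed node specs and workload figures, of what happens when storage alone is the constrained resource but capacity can only be added in whole hyperconverged nodes:

```python
import math

# Minimal sketch of the hyperconverged scaling problem: capacity is added
# in whole nodes, so a crunch on one resource (storage) over-provisions
# the others (compute). Node specs and workload figures are assumptions.

node_storage_tb = 20            # assumed usable storage per node, TB
node_cpu_cores = 32             # assumed CPU cores per node
extra_storage_needed_tb = 100   # assumed shortfall, storage only

# Whole nodes must be added even though only storage is needed.
nodes_to_add = math.ceil(extra_storage_needed_tb / node_storage_tb)
stranded_cores = nodes_to_add * node_cpu_cores

print(f"Nodes added to cover {extra_storage_needed_tb} TB: {nodes_to_add}")
print(f"CPU cores bought but not needed: {stranded_cores}")   # 160 idle cores
```

Five nodes and 160 idle cores, just to buy disks; that is the integration tax Bradbury is pointing to.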
And as I mentioned, even tried-and-true virtual infrastructure has some hidden drawbacks that are only recently coming to light. Security firm Kaspersky, for example, claims that recovering from a breach of virtual infrastructure can be dramatically more expensive than recovering from one on physical resources. In a recent survey, the company found that recovery costs in the virtual world average about $800,000 for large enterprises, roughly double the physical cost. For small firms, the discrepancy is even more dramatic: about $60,000 for virtual systems compared to $26,000 on bare metal. One possible explanation is that more mission-critical applications are porting over to virtual infrastructure, so the data at risk is more valuable. But securing virtual environments is also more complicated and expensive, leading many organizations to pursue half-measures or to underestimate the unique security needs of abstracted architecture.
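For what it’s worth, the survey figures cited above work out as follows (the large-enterprise physical cost is inferred from “about double”):

```python
# Ratios derived from the breach-recovery averages cited above.
# The large-enterprise physical figure is inferred from "about double."

costs = {
    "large enterprise": {"virtual": 800_000, "physical": 400_000},
    "small firm":       {"virtual": 60_000,  "physical": 26_000},
}

for segment, c in costs.items():
    ratio = c["virtual"] / c["physical"]
    print(f"{segment}: virtual recovery runs {ratio:.1f}x the physical cost")
```

Roughly 2x for large enterprises and 2.3x for small firms; either way, the virtualization premium is hard to ignore.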
Despite the drawbacks of all these emerging technology initiatives, it is fair to say that the upsides of improved efficiency, flexibility and higher levels of service for emerging productivity-enhancing applications still result in a net positive for the enterprise.
But just as irrational fear and negativity are detrimental to technology advancement, so too is irrational enthusiasm. The future data center will be better in many ways than the fragmented, physical infrastructure of today, but it will not be without difficulties.
A realistic outlook that encompasses both the challenges and opportunities of emerging technology is the only sure way to get it right in the end.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.