It seems the more the enterprise becomes steeped in cloud computing, the more we hear of the end of local infrastructure in favor of utility-style “mega-data centers.” This would constitute a dramatic change to a long-standing industry that, despite its ups and downs, has functioned primarily as an owned-and-operated resource for many decades. So naturally, this raises the question: Is this real? And if so, how should the enterprise prepare for the migration?
Earlier this week, I highlighted a recent post from Wikibon CTO David Floyer touting the need for software-defined infrastructure in the development of these mega centers. Floyer’s contention is that “megaDs” are not merely an option for the enterprise, but the inevitable future, in that they will take over virtually all processing, storage and other data functions across the entire data ecosystem. The key driver, of course, is cost, which can be distributed across multiple users to provide a much lower TCO than traditional on-premises infrastructure. At the same time, high-speed networking, 100 Gbps or more, has dramatically reduced the latency of distributed operations and is now available at a fraction of the cost of only a few years ago.
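The cost-distribution argument can be made concrete with a quick back-of-the-envelope comparison. The sketch below is purely illustrative: every figure (capex, opex, tenant count) is an assumption for the example, not a number from Floyer’s analysis.

```python
# Illustrative TCO comparison: dedicated on-premises facility vs. a
# per-tenant share of a large, multi-tenant mega-data center.
# All dollar figures and the tenant count are assumed for this sketch.

def on_prem_tco(capex, annual_opex, years):
    """Total cost of ownership for a dedicated, owned facility."""
    return capex + annual_opex * years

def shared_tco(facility_capex, facility_opex, years, tenants):
    """Per-tenant cost when one large facility's build-out and
    operating costs are amortized across many users."""
    return (facility_capex + facility_opex * years) / tenants

# Hypothetical numbers: a mid-size enterprise data center vs. a
# 100-tenant mega facility over a 10-year horizon.
dedicated = on_prem_tco(capex=5_000_000, annual_opex=1_000_000, years=10)
per_tenant = shared_tco(facility_capex=200_000_000,
                        facility_opex=30_000_000, years=10, tenants=100)

print(f"Dedicated 10-year TCO:  ${dedicated:,.0f}")   # $15,000,000
print(f"Per-tenant 10-year TCO: ${per_tenant:,.0f}")  # $5,000,000
```

Even with the mega facility costing far more in absolute terms, spreading it across enough tenants drives the per-user TCO well below the dedicated alternative, which is the heart of the utility-model pitch.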
This should serve as nothing less than a call to action for CIOs around the globe, says tech journalist Bert Latamore. If Floyer is right, virtually every enterprise in existence today will convert to a full third-party IaaS footing within 10 years – a gargantuan migration task. For those who think they can put off the inevitable, consider that many business units are already moving to their own SaaS-based operations, which would leave the enterprise with little or no cohesiveness in its overall data environment. The choice, then, is clear: a single, integrated infrastructure or multiple, uncoordinated ones.
In a way, this is in keeping with history, says Seagate’s Albert “Rocky” Pimentel. Real estate costs have long been a primary business challenge, and the type and location of data facilities play a large role in operational expenses and overall data flexibility. In this vein, it is reasonable to expect this new data center industry to model itself on traditional utilities, with large centralized facilities supplemented by smaller, regional operations and perhaps even local, containerized plants for high-speed functions. Think of it as the data equivalent of the large cable companies, which in fact are already aggressively pursuing B2B communications over their established broadband networks.
It is also telling that some traditional hardware vendors are starting to shift their portfolios to meet the megaD trend. Storage provider OCZ, for example, recently released the Intrepid 3000 Series solid-state SATA drive, which scales up to 800 GB and provides upwards of 520/470 MBps sequential read/write performance and 90,000/40,000 random read/write IOPS. With end-to-end data path protection, in-flight protection, internal RAID redundancy and 256-bit AES encryption, the drive is tailored for secure, large-scale environments.
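To put those quoted figures in perspective, a little arithmetic shows what they mean in practice. The sketch below uses only the numbers cited above; the 4 KB random I/O size is an assumed benchmark default, not a published spec.

```python
# Back-of-the-envelope arithmetic from the Intrepid 3000 figures quoted
# above: 800 GB capacity, 520 MB/s sequential read, 90,000 random read IOPS.

CAPACITY_GB = 800
SEQ_READ_MBPS = 520
RANDOM_READ_IOPS = 90_000
BLOCK_KB = 4  # assumed 4 KB random I/O size, a common benchmark default

# Time to stream the entire drive sequentially (decimal units):
seq_seconds = (CAPACITY_GB * 1000) / SEQ_READ_MBPS

# Effective random-read throughput at the assumed block size:
random_mbps = RANDOM_READ_IOPS * BLOCK_KB / 1000

print(f"Full sequential read: ~{seq_seconds / 60:.1f} minutes")      # ~25.6 minutes
print(f"4 KB random read throughput: ~{random_mbps:.0f} MB/s")       # ~360 MB/s
```

In other words, the full drive can be read in under half an hour, and even fully random 4 KB reads sustain hundreds of megabytes per second – the kind of density and speed that large, consolidated facilities depend on.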
A lot can happen in 10 years, so it is certainly feasible that the entire data industry could change over to utility-style service in that time. However, institutional resistance is a hard thing to overcome, and every time the headlines start screaming about another outage at Amazon, Dropbox or one of the other major providers, the enterprise executive suite is hit with another round of fear, uncertainty and doubt (FUD).
On the other hand, pay-as-you-go infrastructure is a great way for hungry start-ups to get in the game, providing highly nimble competition for old-guard businesses struggling with legacy data centers.