The Changing Data Center Climate


    Ask a roomful of IT experts what the future holds for the data center and you’re likely to get a roomful of different opinions. This is doubly true during periods of revolutionary change like we are seeing now.

    With outlooks ranging from all-cloud, all-software constructs to massive hyperscale infrastructure tailored toward specific web-facing or Big Data workloads, it seems that the enterprise has a range of options when it comes to building next-generation infrastructure.

    Even during times of heady change, however, it is still useful to anticipate the future by analyzing the past. TechNavio, for example, has noted that standard equipment racks have nearly doubled in height over the past decade, leading the firm to conclude that future data centers will feature higher ceilings and taller racks. A key driver is the rising cost of property, which is pushing designers to build up rather than out. But it also has to do with the need for greater densities and the prevalence of wireless connectivity, which reduces the need for bulky cables.

    Networking is under more pressure than ever to accommodate increasingly complex and dynamic data environments, so it’s no surprise that it is facing significant changes as well. MIT recently unveiled a concept it calls the “no-wait data center,” which uses a centralized architecture and advanced management to cut request latency by nearly 100 percent, to an almost imperceptible 0.23 microseconds. The Fastpass model does away with decentralized, node-based networking automation in favor of an “arbiter” that handles all routing. Unlike a standard router or network controller, the arbiter can accommodate up to 2,000 single-gigabit connections, overseeing upwards of 2.2 Tbps. Researchers calculate that the system could oversee networks of 1,000 switches or more and vastly reduce networking costs for complex applications like Big Data and cloud computing.
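    The core idea behind such an arbiter can be sketched in a few lines: instead of letting each node queue and route on its own, a central scheduler assigns every transfer request to the earliest timeslot in which both endpoints are free, so packets never wait in network queues. The sketch below is purely illustrative (it is not MIT's Fastpass code, and the function and data shapes are assumptions for the example):

    ```python
    # Illustrative sketch of centralized timeslot arbitration, in the spirit
    # of the "no-wait" arbiter described above. Not the actual Fastpass
    # implementation; names and structures are invented for this example.
    from collections import defaultdict

    def arbiter_schedule(requests):
        """requests: list of (src, dst) pairs; returns {(src, dst): timeslot}.

        Each endpoint can send or receive at most once per timeslot, so the
        arbiter places each request in the first slot where both ends are idle.
        """
        busy = defaultdict(set)   # timeslot -> endpoints already committed
        schedule = {}
        for src, dst in requests:
            t = 0
            # Scan forward for the first slot where neither endpoint is in use.
            while src in busy[t] or dst in busy[t]:
                t += 1
            busy[t].update((src, dst))
            schedule[(src, dst)] = t
        return schedule

    # Two transfers sharing a source must land in different slots;
    # transfers with disjoint endpoints can share a slot.
    print(arbiter_schedule([("A", "B"), ("A", "C"), ("D", "B"), ("C", "D")]))
    ```

    Because all placement decisions happen in one place, the arbiter can pack the network to capacity without per-switch queuing, which is where the latency reduction comes from.
    
    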

    Change is also coming to the ways in which data infrastructure is designed and optimized. According to Chris Crosby, CEO of Compass Datacenters, software applications that focus on systems calibration and capacity planning are already altering how we think about data infrastructure and its relationship to the business process. Whether it’s continuous modeling, computational fluid dynamics (CFD) or simple benchmarking, enterprise executives have the unprecedented ability not only to gauge how their resources are meeting today’s data needs, but also how they will have to change in order to meet tomorrow’s.

    And even though modularity and white-box infrastructure seem to be taking hold, at least on the hyperscale level, it is still interesting to note the level of customization that is taking place, says Belden Solutions’ Michael Salvador. Market research indicates that upwards of 75 percent of ODM sales are now “skinless” servers going to Facebook, Google and other massive buyers, and that is likely to trickle down to the broader enterprise market through efforts like the Open Compute Project. So it might not be long before the standard practice at many organizations is to provision their own raw hardware and then customize it at will using the latest virtual and software-defined architectures.

    It’s been said that the only constant is change. That certainly applies to the data center. The trick, as always, is to anticipate user needs and burgeoning data architectures so that the hardware decisions do not impede the kind of flexibility needed to support agile data operations.

    Unfortunately, there’s no template for that, but there is at least the growing recognition that infrastructure choices should no longer be driven strictly by the problems of today, but by the opportunities that lie ahead.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.
