    Data Center Conversion: Making the Most of What You Have

    The enterprise is eager to move applications to the cloud, implement Big Data and the Internet of Things and, in general, embrace the other advanced technologies driving digital transformation.

    At the same time, however, the enterprise has quite a bit of legacy data infrastructure to support, and it would be a waste to simply scrap this investment just because something new has come along.

    This is why conversion of existing facilities has become such a hot topic of late. On the one hand, today’s applications running on today’s infrastructure will still support a good portion of the enterprise workload going forward; on the other, there are myriad ways to make these resources more efficient and more effective within the broader scope of cloud and converged infrastructure development.

    Intel’s Diane Bryant, for instance, told the recent Intel Developer Forum that the company is already looking forward to the day when the accepted standard of computing shifts from the server to the rack. To that end, the company is moving forward with its Rack Scale architecture, which leverages the Snap open telemetry framework to provide much more granular application deployment on available resources. The goal is to implement Rack Scale as a reference architecture that can support emerging and traditional applications in ways that produce optimal performance at minimal cost. Bryant added that Intel is willing to take a go-slow approach with this change to give enterprises a chance to maximize the flexibility of application and resource configuration without imposing too much uniformity on the hardware side.
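
    As a rough illustration of the rack-scale idea, here is a minimal sketch in Python of how per-node telemetry gathered across a rack might drive placement of a new workload onto the resources with the most headroom. The node names, metrics and thresholds are hypothetical, and this is not a representation of the Snap or Rack Scale APIs.

        # Illustrative only: pick the node in a rack with the most headroom,
        # based on a hypothetical telemetry snapshot.
        from dataclasses import dataclass

        @dataclass
        class NodeTelemetry:
            name: str
            cpu_used_pct: float  # 0-100
            mem_used_pct: float  # 0-100

        def place_workload(rack, cpu_needed_pct, mem_needed_pct):
            """Return the least-loaded node that can still fit the workload,
            or None if nothing in the rack has room."""
            candidates = [
                n for n in rack
                if n.cpu_used_pct + cpu_needed_pct <= 80.0  # keep a safety margin
                and n.mem_used_pct + mem_needed_pct <= 80.0
            ]
            if not candidates:
                return None
            best = min(candidates, key=lambda n: n.cpu_used_pct + n.mem_used_pct)
            return best.name

        # Hypothetical telemetry for a three-node rack.
        rack = [
            NodeTelemetry("node-01", cpu_used_pct=62.0, mem_used_pct=70.0),
            NodeTelemetry("node-02", cpu_used_pct=35.0, mem_used_pct=40.0),
            NodeTelemetry("node-03", cpu_used_pct=55.0, mem_used_pct=30.0),
        ]
        print(place_workload(rack, cpu_needed_pct=20.0, mem_needed_pct=25.0))  # node-02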

    Existing data centers also have a lot of leeway when experimenting with aisle configuration, according to Erich Hamilton, director of engineering for rack designer DAMAC. As data centers become more customized, many operators are turning toward structure-based aisle containment as a means to increase density without expanding the overall footprint. This approach allows cabinets to be stacked higher, similar to the warehouse-style data centers that populate hyperscale infrastructure, while still enabling a flexible conversion timeframe that suits the enterprise and its data requirements. At the same time, this approach allows the enterprise to experiment with a range of energy-saving options because the racks themselves are still open and capable of alteration.

    One thing that has to change is the operational side of data infrastructure management. As the federal Commission on IT Cost, Opportunity, Strategy and Transparency (IT COST) recently reported, more than half of the IT spend by government agencies goes toward operations and management while a paltry 23 percent is spent on development, modernization and enhancement (DME). This falls in line with many commercial IT budgets, although it is fair to say that most enterprises are further along on the upgrade path to more scalable, efficient infrastructure. Again, standardization is recommended as one of the key elements in federal data center modernization, but this should not eliminate the capacity for IT to craft non-standard solutions for key applications.

    The challenge going forward, according to a recent Intel-sponsored post on VentureBeat, is to address the deficiencies of core assets like compute, storage, networking and operating systems so that the benefits can be distributed across the entire data environment, not just select silos. An all-at-once approach is expensive, but it has the advantage of delivering dramatic change to a wide set of users, and it can usually be accomplished in a cohesive, integrated fashion. Handled properly, it can actually be cheaper than a drawn-out conversion strategy, producing a better ROI over the lifetime of the system.
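
    To make the cost argument concrete, the back-of-the-envelope comparison below (in Python, with purely illustrative placeholder figures rather than data from the post) shows how an all-at-once refresh can come out ahead of a phased conversion once the legacy operating costs carried by the un-converted portion are counted.

        # Purely illustrative arithmetic; all figures are hypothetical.
        YEARS = 5
        LEGACY_OPEX = 1.0        # annual operating cost of the legacy estate (arbitrary units)
        MODERN_OPEX = 0.6        # annual operating cost once converted
        CAPEX_ALL_AT_ONCE = 2.0  # one-time cost of a full refresh
        CAPEX_PHASED = 2.4       # phased work often costs more (re-planning, parallel running)

        # All-at-once: pay the capital cost up front, then run at modern opex.
        all_at_once = CAPEX_ALL_AT_ONCE + MODERN_OPEX * YEARS

        # Phased: convert one fifth of the estate per year; the remainder keeps
        # running at legacy opex until its turn comes.
        phased = CAPEX_PHASED
        for year in range(YEARS):
            converted = (year + 1) / YEARS
            phased += MODERN_OPEX * converted + LEGACY_OPEX * (1 - converted)

        print(f"all-at-once: {all_at_once:.2f}, phased: {phased:.2f}")
        # Prints all-at-once: 5.00, phased: 6.20 with these placeholder numbers;
        # different inputs can flip the result.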

    The past is fixed, but the future is not. All upgrade plans carry a certain amount of risk that either capabilities will not be what you expect or requirements will shift in a new direction. A highly flexible architecture is the best way to cover all the bases when it comes to emerging technologies and business models, but there probably isn’t a slam-dunk, fully optimized data center in your future unless you are very lucky.

    By targeting your upgrades to suit short- and medium-term data and user requirements, however, you will likely be in a better position to shift your digital infrastructure should the long term throw you a curve.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
