    Data Center Automation: The Truth Behind the Sci-Fi

    Automating the data center is one of those things that evokes conflicting emotions in enterprise executives. After all, who wouldn’t want a virtually hands-free data ecosystem in which everybody’s needs are satisfied at a moment’s notice? Then again, no one, not even the people building the automation stacks, believes such functionality is realistic.

    But while it is true that automation is not likely to produce Star Trek-esque data service any time soon, the fact is that today’s platforms can and do improve data processes quite a bit, and implementation, particularly on virtual and abstract architectures, is not nearly as cumbersome as it was just a few years ago.

    At VMworld, one of the central developments in the VMware ecosystem was the EVO SDDC platform, heir to the EVO:RAIL system designed to transform virtual infrastructure into the software-defined data center. EVO SDDC brings a high degree of automation to the VMware stack, enabling the enterprise to integrate compute, storage and networking resources more closely into a cohesive data environment. The system offers tools for multi-rack resource management, intelligent operations automation spanning physical, virtual and cloud infrastructure, plus hardware management services that abstract the differences between heterogeneous platforms, including power systems. The overall idea is to leverage software and open source constructs to enable a fully automated data ecosystem that can be configured and reconfigured entirely in software.
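    To make the idea of an environment "configured and reconfigured entirely in software" concrete, here is a minimal sketch of declarative, desired-state reconfiguration. It is purely illustrative: the state keys and the reconcile function are hypothetical stand-ins, not VMware's EVO SDDC API.

    ```python
    # Illustrative desired-state reconfiguration: infrastructure is described
    # as data, and a reconciler computes the actions needed to close the gap.
    # Keys and values are hypothetical, not VMware's EVO SDDC object model.

    desired_state = {
        "compute": {"hosts": 16, "vcpus_per_host": 32},
        "storage": {"pool_tb": 120, "replication": 2},
        "network": {"vlans": [100, 200, 300], "overlay": "vxlan"},
    }

    current_state = {
        "compute": {"hosts": 12, "vcpus_per_host": 32},
        "storage": {"pool_tb": 120, "replication": 1},
        "network": {"vlans": [100, 200], "overlay": "vxlan"},
    }

    def reconcile(desired, current):
        """List the changes required to bring current in line with desired."""
        actions = []
        for domain, settings in desired.items():
            for key, want in settings.items():
                have = current.get(domain, {}).get(key)
                if have != want:
                    actions.append(f"{domain}.{key}: {have} -> {want}")
        return actions

    for action in reconcile(desired_state, current_state):
        print("apply:", action)
    ```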

    Even at this level of functionality, however, it is important to remember that automation must still respect the limitations of physical resources, says MTM Technologies’ Bill Kleyman. Theoretically, of course, resources of all types and in virtually limitless amounts are available in the cloud, but unless you want to subject your IT budget to the whims of the automation stack, you would do well to put some rules in place. These include resource provisioning guidelines based on current and expected user counts, as well as connectivity requirements to and from branch offices and central data locations, all balanced against the needs of advanced computing initiatives such as data mobility and high-speed analytics. In all likelihood, IT’s responsibility for maintaining a stable computing environment will increase even as automation takes over many of today’s mundane activities.
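    Kleyman's point about rules lends itself to a short example. The sketch below shows one way a guardrail might cap what the automation stack is allowed to provision; the user counts, vCPU ratios, prices and function names are all assumptions for illustration, not figures from the article.

    ```python
    # Hypothetical guardrail for an automation stack: derive a provisioning
    # cap from current and projected user counts, and refuse any request
    # that would blow past a monthly budget. All rates are illustrative.

    CURRENT_USERS = 1800
    EXPECTED_GROWTH = 1.25          # 25% headroom for projected user growth
    VCPUS_PER_100_USERS = 4
    COST_PER_VCPU_MONTH = 18.00     # assumed cloud price, USD
    MONTHLY_BUDGET = 2500.00

    def vcpu_cap():
        """Cap derived from user counts, not from whatever the stack asks for."""
        projected = CURRENT_USERS * EXPECTED_GROWTH
        return int(projected / 100 * VCPUS_PER_100_USERS)

    def approve(requested_vcpus):
        within_cap = requested_vcpus <= vcpu_cap()
        within_budget = requested_vcpus * COST_PER_VCPU_MONTH <= MONTHLY_BUDGET
        return within_cap and within_budget

    print(vcpu_cap())     # 90 vCPUs allowed under these assumptions
    print(approve(80))    # True: inside both the cap and the budget
    print(approve(200))   # False: the automation stack is overreaching
    ```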

    It’s also true that automation does not have to be an all-or-nothing proposition. End-to-end automation is certainly desirable, but targeted approaches, such as HashiCorp’s DevOps-facing Atlas platform, can produce substantial productivity gains as well. The system incorporates many of HashiCorp’s earlier standalone tools, such as the Vagrant environment management module, the Packer automated artifact builder and the Terraform infrastructure provisioning kit. By combining them into a single, integrated platform, Atlas provides a fully automated application delivery pipeline that streamlines infrastructure and resource consumption, simplifies rollbacks, and enhances configuration sharing among development teams.
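    The pipeline concept is easier to see in miniature. The following sketch imitates the build-provision-deploy-rollback flow that Atlas automates; the stage functions are hypothetical stand-ins for the roles Packer, Terraform and a deployment layer play, not HashiCorp's actual APIs.

    ```python
    # Miniature delivery pipeline with rollback. build_artifact stands in
    # for a Packer-style image build, provision for Terraform's role, and
    # deploy for the release step; all three are illustrative only.

    history = []                     # versions that deployed successfully

    def build_artifact(version):
        return f"app-{version}.img"

    def provision(artifact):
        print(f"provisioning infrastructure for {artifact}")

    def deploy(artifact):
        print(f"deploying {artifact}")

    def release(version):
        artifact = build_artifact(version)
        try:
            provision(artifact)
            deploy(artifact)
            history.append(version)
        except Exception:
            if history:                       # roll back to last good version
                print(f"rolling back to {history[-1]}")
                deploy(build_artifact(history[-1]))
            raise

    release("1.0.4")
    ```

    The point of the integration is that rollbacks fall out of the same pipeline that does the releases, rather than being a separate, manually maintained procedure.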

    Backup and recovery is also an area ripe for automation, according to DR specialist CloudVelox. In fact, it could very well be the missing ingredient for greater deployment of cloud-based recovery. The company recently released a survey indicating that a majority of organizations believe their in-house DR environments are lacking, but many are still hesitant to trust the cloud, primarily due to security and reliability concerns. More than half, however, say that a solid automation component, such as CloudVelox’s Pilot Light DR platform, natch, would enable them to embrace the cloud more fully. A key element in the automated approach is migration: the enterprise must have a simple, effective means of getting data into and out of distributed architectures, or cloud-based recovery programs will remain on the sidelines.
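    A rough sense of how such automation behaves can be sketched in a few lines. The monitor below promotes a warm cloud replica after a run of failed health checks; the thresholds and the simulated probe are assumptions for illustration, not CloudVelox's implementation.

    ```python
    # Hypothetical "pilot light" style failover: keep a minimal replica warm
    # in the cloud, probe the primary on a schedule, and promote the replica
    # once the primary misses enough consecutive checks.

    import random

    FAILURE_THRESHOLD = 3        # consecutive missed checks before failover

    def primary_healthy():
        """Stand-in for a real probe against the primary data center."""
        return random.random() > 0.4   # simulated: a check fails ~40% of the time

    def promote_cloud_replica():
        print("promoting warm cloud replica to primary and redirecting traffic")

    def monitor(checks=20):
        misses = 0
        for _ in range(checks):
            misses = 0 if primary_healthy() else misses + 1
            if misses >= FAILURE_THRESHOLD:
                promote_cloud_replica()
                return
        print("primary stayed healthy; no failover needed")

    monitor()
    ```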

    The danger in piecemealing your automation system is that it could produce the same kind of silo-based environment that hampers workflows in the current data center. A clear-cut development and implementation strategy would alleviate this problem, but it would have to be couched within an overarching development framework, which even then might require a fair bit of in-house TLC to keep it running at peak efficiency.
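    One common way to head off those silos, sketched below under assumed names, is to require every point tool to plug into a shared interface so that workflows can span provisioning, pipelines and DR uniformly.

    ```python
    # Hypothetical overarching framework: every automation tool implements
    # one contract and registers centrally, so nothing becomes an island.

    from abc import ABC, abstractmethod

    class AutomationModule(ABC):
        """Common contract every point tool must implement."""
        @abstractmethod
        def run(self, task):
            ...

    class ProvisioningModule(AutomationModule):
        def run(self, task):
            return f"provisioned: {task}"

    class RecoveryModule(AutomationModule):
        def run(self, task):
            return f"recovery plan verified: {task}"

    # One registry, one entry point: workflows span what would otherwise
    # be separate silos without caring which tool sits underneath.
    registry = {
        "provision": ProvisioningModule(),
        "recover": RecoveryModule(),
    }

    for name, module in registry.items():
        print(module.run(f"{name} nightly job"))
    ```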

    One thing seems certain: The enterprise will not be able to chart a course through the automation waters on its own. And frankly, true expertise in this emerging field is relatively hard to come by.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
