
    Turning Analytics on the Data Center


    The art of analytics has developed to the point where it can model and predict the behavior of highly complex systems and structures, so it was inevitable that the enterprise would turn it toward its own data infrastructure. After all, if the name of the game is to foster nimble, dynamic and responsive IT to meet the challenges of the emerging data economy, you can’t stay mired in today’s manual processes.

    The question, though, is what role analytics should play in the overall automation stack. On some level, it will be handy for the system to analyze itself and make corrections as needed, with little or no human oversight. But you probably don’t want to push this model too far, lest the system start making unwise choices based on incomplete data or faulty assumptions. Remember, analytics are only as good as the data you feed them and the questions you ask.

    The dream of many data center management pros is to pull metadata from the physical plant and incorporate it into application-level integration, says TechRepublic’s Keith Townsend. There are different ways of doing this, of course. Companies like CloudPhysics offer SaaS-based analysis for key vendor solutions like vSphere, enhancing tools like the Distributed Resource Scheduler for use in multi-cluster environments. Intel, meanwhile, is pursuing a more open approach with its Snap platform, which uses telemetry data to inform cluster controllers or even applications directly. In either case, the goal is to streamline problem identification and resolution and boost overall data efficiency.
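    Neither product’s internals are detailed here, but the general pattern Snap enables (lightweight collectors publishing telemetry that a cluster controller consumes and acts on) can be sketched in a few lines of Python. Everything below is hypothetical: the host names, the metric names and the threshold.

```python
import random
import time
from dataclasses import dataclass


@dataclass
class Metric:
    """One telemetry sample, akin to a single collector-plugin reading."""
    host: str
    name: str
    value: float
    timestamp: float


def collect(host: str) -> list[Metric]:
    """Stand-in for a real collector plugin; returns simulated readings."""
    now = time.time()
    return [
        Metric(host, "cpu.utilization", random.uniform(10, 95), now),
        Metric(host, "mem.utilization", random.uniform(20, 80), now),
    ]


class ClusterController:
    """Consumes telemetry and flags hosts that breach a simple threshold."""

    def __init__(self, cpu_limit: float = 85.0):
        self.cpu_limit = cpu_limit

    def ingest(self, metrics: list[Metric]) -> None:
        for m in metrics:
            if m.name == "cpu.utilization" and m.value > self.cpu_limit:
                print(f"[controller] {m.host}: CPU at {m.value:.0f}%, "
                      f"candidate for workload rebalancing")


controller = ClusterController()
for host in ("node-01", "node-02", "node-03"):
    controller.ingest(collect(host))
```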

    Another option is the SIOS iQ platform, which applies Big Data analytics to a wide range of data sets across local and third-party frameworks in hopes of spotting hidden patterns within disparate infrastructure. The company says it can home in on the root causes of performance issues and strike a better balance between VM workloads and underlying resources, improving utilization and efficiency across the data environment. The platform was recently given a Data Center Excellence Award by infoTECH Spotlight.
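    SIOS does not disclose its algorithms, but the core idea behind this style of root-cause analysis, correlating a symptom against candidate metrics from different layers and following the strongest signal, can be illustrated with toy data. All the numbers here are invented, and statistics.correlation requires Python 3.10 or later:

```python
from statistics import correlation

# Hypothetical time-aligned samples: latency on a VM and two candidate causes.
vm_latency_ms = [12, 14, 13, 35, 40, 38, 15, 13]
datastore_iops = [900, 920, 910, 2400, 2600, 2500, 950, 930]
host_cpu_pct = [40, 42, 41, 44, 43, 45, 42, 41]

candidates = {"datastore IOPS": datastore_iops, "host CPU": host_cpu_pct}
scores = {name: correlation(vm_latency_ms, series)
          for name, series in candidates.items()}

# The metric that tracks the latency spike most closely is the best lead.
likely_cause = max(scores, key=scores.get)
print(f"likely root cause: {likely_cause} (r = {scores[likely_cause]:.2f})")
```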

    When it comes to complex data infrastructure management, it’s hard to top Bloomberg’s network in and around New York City, says Next Platform’s Timothy Prickett Morgan. The company has thousands of client terminals across the region supported by three major data centers, all of which must be in top form to meet the service requirements of Wall Street. To that end, the company utilizes the new BVault platform on the Mesosphere Data Center Operating System (DCOS) for service discovery and other functions. The company has also added a number of home-grown tools for tasks like log aggregation and the gathering of application performance statistics, the output of which is fed into a Kafka engine that, in turn, fuels monitoring and alert systems. It’s an interesting read for those who like to delve into the guts of infrastructure management.
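    Bloomberg’s home-grown tooling is not public, but the stats-to-Kafka-to-alerting pipeline Morgan describes is a familiar pattern. A minimal sketch using the open-source kafka-python client might look like the following; the topic name, broker address, service name and latency threshold are all assumptions:

```python
import json
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

# Producer side: an app ships a performance sample to an aggregation topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("app-perf-stats", {"service": "ticker-feed", "p99_ms": 480})
producer.flush()

# Consumer side: a monitoring process reads the same topic and raises alerts.
consumer = KafkaConsumer(
    "app-perf-stats",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop polling after 5 seconds of silence
)
for record in consumer:
    stats = record.value
    if stats["p99_ms"] > 250:  # assumed SLA threshold
        print(f"ALERT: {stats['service']} p99 latency {stats['p99_ms']} ms")
```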

    With all the shiny new analytics tools hitting the channel, however, it’s important not to lose sight of the real purpose: turning data into knowledge. As PernixData’s Jeff Aaron noted recently, systems that provide simple visibility but no correlation or cohesiveness will result in reactive decision-making and lost opportunities.

    To be truly effective, an analytics platform must not only collect the right data at the right time but also apply the right intelligence to reach quick and meaningful conclusions. To that end, data collection must focus largely on the hypervisor, and the platform should provide descriptive, predictive and prescriptive results; that is, it should tell you not only how things are, but how they will soon be and what changes need to take place to achieve desired outcomes.
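    Those three tiers can be illustrated on a single hypervisor metric. In this toy sketch, the utilization history, the naive linear forecast and the target threshold are all invented:

```python
# Descriptive, predictive and prescriptive views of one hypervisor metric.
cpu_history = [52, 55, 58, 61, 65, 68, 72, 75]  # % utilization per interval

# Descriptive: how things are right now.
current = cpu_history[-1]

# Predictive: a naive linear extrapolation of the recent trend.
slope = (cpu_history[-1] - cpu_history[0]) / (len(cpu_history) - 1)
forecast = current + slope * 3  # three intervals ahead

# Prescriptive: what to change to stay within the target envelope.
TARGET = 80
if forecast > TARGET:
    print(f"now {current}%, forecast {forecast:.0f}%: "
          "recommend migrating a VM off this host before the breach")
else:
    print(f"now {current}%, forecast {forecast:.0f}%: no action needed")
```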

    An increasingly analytics-based infrastructure management stack is all but inevitable in the enterprise. In fact, for most organizations, it will consume only a fraction of the resources that will be devoted to analyzing market conditions, product development, and a host of other factors.

    The key, though, is to remember that while that stack may seem intelligent, it is really only reacting and responding to its own programming, and therefore should only be used to augment human oversight, not replace it.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.

