Automation of the data environment is quickly becoming a necessity for the enterprise, given the dramatic surge in data volumes now under way. But if automation is difficult enough in the local data center, how is the enterprise to cope with data architectures that span third-party cloud infrastructure as well?
Automation is widely seen as the primary way to drive real value out of virtual, cloud-based and software-defined infrastructure. According to a recent study by automation specialist PMG, nearly all IT professionals recognize the value that business process automation brings to key objectives like improving customer experience, increasing productivity and developing new products and markets. What’s more, clear majorities recognize that automation will radically alter the way virtual and cloud environments are managed and will be a key driver in reducing overall IT costs, particularly as Big Data and data integration efforts unfold.
The trick, of course, is building an effective automation stack that can cover disparate, distributed data ecosystems. But while the details may differ, many of the cues can be taken from industrial and manufacturing automation processes, says Inductive Automation’s Travis Cox. These include identifying the type and number of Programmable Logic Controller (PLC) points you’ll need, building and managing automation tags, and deciding how to treat historical data. A key element, of course, is setting realistic expectations regarding data collection and the capabilities of commercially available automation systems. It is best to engineer automation around what is possible, not what is desired.
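The planning steps above can be sketched in a few lines of code. This is an illustrative model only: the field names and tag paths are assumptions for the example, not any vendor's actual tag schema or API.

```python
from dataclasses import dataclass

# Minimal model of the planning data described in the text:
# PLC points, automation tags, and a per-tag history policy.

@dataclass
class AutomationTag:
    name: str            # hypothetical tag path, e.g. "Line1/Pump3/FlowRate"
    plc_address: str     # PLC register the tag reads
    data_type: str       # "float", "bool", ...
    keep_history: bool   # whether samples are written to the historian
    scan_ms: int         # polling interval in milliseconds

def count_plc_points(tags):
    """Distinct PLC addresses needed -- a first sizing estimate."""
    return len({t.plc_address for t in tags})

tags = [
    AutomationTag("Line1/Pump3/FlowRate", "N7:0", "float", True, 1000),
    AutomationTag("Line1/Pump3/Running", "B3:0/1", "bool", False, 500),
    AutomationTag("Line1/Tank2/Level", "N7:1", "float", True, 2000),
]

print(count_plc_points(tags))               # 3 distinct PLC points
print(sum(t.keep_history for t in tags))    # 2 tags feed the historian
```

Enumerating tags this way makes the "realistic expectations" question concrete: the point count and the number of history-enabled tags drive both polling load and storage growth.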
For those planning on the software-defined data center (SDDC), automation is likely to be the central component of a well-run environment, says Symantec’s Drew Meyer. In fact, without it you are likely to run into trouble once unmanaged virtual resources start multiplying out of control. Add to this that the underpinnings of software-defined architectures will most likely be in a constant state of flux, so broad visibility and lightning-quick reconfiguration of end-to-end resource relationships will be the only things preventing the SDDC from collapsing in on itself.
That being said, there is still a chance that automation can do more harm than good, particularly if it is not engineered properly, says CloudWedge’s Natalie Lehrer. Automation is best when it comes to predictable, repetitive tasks, but even then it can be difficult to gauge the impact that some of these tasks will have on seemingly unrelated systems and processes. Automated disk cleanup, for example, can be a significant time-saver, but it can also hamper future operations if key data sets are deleted for the wrong reasons. Before rushing into automation, then, make sure you conduct a thorough review of the processes to be automated and their potential impact on the data environment.
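The disk-cleanup caution above can be made concrete with a short sketch: an automated cleanup that defaults to a dry run and skips a protected list of data sets. The directory names and the 30-day threshold are assumptions for the example, not a recommended policy.

```python
import os
import time

# Data sets the cleanup must never touch -- the "thorough review"
# the text recommends would produce a list like this.
PROTECTED = {"backups", "audit-logs"}
MAX_AGE_SECONDS = 30 * 24 * 3600  # delete nothing newer than 30 days

def plan_cleanup(root, dry_run=True):
    """Return files that would be removed; delete only when dry_run=False."""
    now = time.time()
    candidates = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune protected subtrees so the walk never descends into them.
        dirnames[:] = [d for d in dirnames if d not in PROTECTED]
        for fname in filenames:
            path = os.path.join(dirpath, fname)
            if now - os.path.getmtime(path) > MAX_AGE_SECONDS:
                candidates.append(path)
                if not dry_run:
                    os.remove(path)
    return candidates
```

Running in dry-run mode first turns the cleanup into a reviewable report, so the impact on seemingly unrelated systems can be checked before anything is actually deleted.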
Like virtually everything else in the data center, automation is neither good nor bad. It could produce a well-oiled data machine, or it could muck things up big time. It’s all in the implementation.
The temptation to automate everything all at once will be strong, but cooler heads should know enough at this point to take it slow.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.