In an ideal DevOps environment, there is little or no distinction between the development and operations of applications and services. Most platforms, in fact, strive to blend these two functions into a cohesive process so that Dev and Ops can move in tandem, significantly enhancing the app lifecycle and providing a far less error-prone product to consumers.
Still, IT operations management remains a distinct entity in a DevOps world, and in fact will see some substantial changes as the focus moves beyond the simple oversight of physical or even virtual resources to encompass more of the application- and data-layer factors that are critical to successful outcomes.
This is uncharted territory for most IT shops, which, like their development counterparts, now have to learn new ways of managing things with an eye toward what it all means for the big picture. And this can lead to confusion when it comes to selecting the right tools and capabilities to support smoothly running workflows.
“It all depends on where we start engaging in the delivery of DevOps services,” said Jaime Palacios, vice president of digital and innovation for Softtek, a global provider of IT services and business process solutions. “If we start from a traditional operations point of view in which there is a request for a configured environment and requirements as to how certain operating systems and middleware are installed, we would start by customizing and integrating those functions into the tool chain.
“From there, we would look to application lifecycle management tools. This is necessary for developers to have a traceable means to understand how defects are reflected along the rest of the lifecycle process.”
This is particularly challenging for enterprises with substantial legacy infrastructure. At the same time they are attempting to convert to private cloud architectures and link these to public resources under a hybrid model, there is the added burden of layering a comprehensive management stack across divergent areas of the development and operations chains. And as most admins will tell you, even within the server, storage and virtual domains, there is usually a plethora of disjointed, disconnected systems that must be integrated into a cohesive operational model.
This can usually be addressed only through broad automation and changes to the development and operational processes themselves. Functions like version testing, for instance, benefit greatly from an overarching automation stack, which can cut their duration from several weeks to several days. At the same time, automation can streamline the hand-off between core business functions like transaction processing and contract management – many of which are steeped in technology that dates back 50 years or more.
It is also difficult to separate the technology of DevOps from the cultural changes it brings. To make a truly effective transition to agile, the enterprise must embrace a fully transformative process implemented across systems, infrastructure, job responsibilities, organizational charts and even business models themselves. Once an application has gone live and is subject to a continuous integration/continuous delivery (CI/CD) support structure, the concept of individuals or teams being responsible only for their own piece of the workflow falls by the wayside. Specialty skills will remain, of course, but everyone will be responsible for all aspects of the product’s success, with updates and changes occurring in a much more rapid and iterative fashion.
So even though Dev and Ops are integrated under a DevOps model, the Ops side still has tools of its own. Here are some of the leading options:
PagerDuty leverages advanced intelligence and analytics to provide systems management, orchestration, incident response and other functions. The platform features an extensible API and offers more than 200 native integrations with third-party products. It also supports ITSM workflows and can be accessed via Android, iOS and smartwatch apps.
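PagerDuty's public Events API, for instance, accepts structured alert events that trigger incidents. A minimal sketch of building such an event body, assuming the Events API v2 shape; the routing key and field values here are placeholders, not tied to a real account:

```python
import json

# Placeholder for a real PagerDuty integration (routing) key.
ROUTING_KEY = "YOUR_INTEGRATION_KEY"

def build_trigger_event(summary, source, severity="error"):
    """Build the JSON body for a 'trigger' event (assumed Events API v2 shape)."""
    return {
        "routing_key": ROUTING_KEY,
        "event_action": "trigger",     # trigger | acknowledge | resolve
        "payload": {
            "summary": summary,        # short description shown to responders
            "source": source,          # host or service that raised the alert
            "severity": severity,      # critical | error | warning | info
        },
    }

event = build_trigger_event("Disk usage above 90%", "db-01.example.com")
body = json.dumps(event)
# A real client would POST `body` to the PagerDuty events endpoint.
```

The payload is deliberately built separately from the HTTP call, which keeps the alert format easy to unit-test without touching the network.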
OverOps provides continuous reliability for the software supply chain, notifying managers when and why code has broken. The system analyzes code in staging and production environments to detect the root cause of errors without relying on log files. It also integrates with JIRA, Slack and other services to automatically route tickets to the right team member.
Memcached speeds up application delivery and performance by alleviating database loads. It uses an in-memory key-value store to save strings, objects and other arbitrary data resulting from database and API calls, as well as page rendering and other functions.
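The pattern Memcached supports is often called cache-aside: check the cache first, fall back to the database on a miss, and store the result for subsequent reads. A minimal sketch, using a plain dict with timestamps in place of a real Memcached client such as pymemcache:

```python
import time

# A plain dict stands in for a Memcached client (illustration only);
# Memcached itself handles expiry natively via a per-key TTL.
cache = {}
CACHE_TTL = 60  # seconds an entry stays fresh

def expensive_db_query(user_id):
    # Placeholder for a slow database call.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    """Return user data, serving from the cache when a fresh entry exists."""
    key = f"user:{user_id}"
    entry = cache.get(key)
    if entry and time.time() - entry["stored"] < CACHE_TTL:
        return entry["value"]            # cache hit: skip the database
    value = expensive_db_query(user_id)  # cache miss: query the database
    cache[key] = {"value": value, "stored": time.time()}
    return value
```

Repeated calls for the same key within the TTL never touch the database, which is exactly the load relief the blurb above describes.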
Puppet Enterprise manages hybrid infrastructure across the entire DevOps lifecycle. It provides a common language that multiple teams can employ to oversee functions like version control, code review, testing automation and continuous integration. It also provides a single platform to enforce the desired state of infrastructure configuration and automatically remediate unexpected changes.
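The desired-state idea behind such tools can be sketched in a few lines: compare what a system reports against what it should be, and emit remediation actions for any drift. This is a toy illustration of the concept only, not Puppet's DSL or implementation:

```python
# Desired state: which services should be running or stopped.
desired = {"ntp": "running", "telnet": "stopped"}

def reconcile(actual):
    """Bring `actual` in line with `desired`; return the remediation actions taken."""
    actions = []
    for service, wanted in desired.items():
        current = actual.get(service, "stopped")
        if current != wanted:
            actions.append((service, wanted))  # record the drift correction
            actual[service] = wanted           # remediate in place
    return actions

# A host that has drifted from the desired configuration.
state = {"ntp": "stopped", "telnet": "running"}
actions = reconcile(state)
# After reconciliation, `state` matches `desired`.
```

Running the loop repeatedly is idempotent: once the state matches, no further actions are produced, which is the property that makes automatic drift remediation safe.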
Chef Automate eliminates time-consuming tasks in scale-out server infrastructure like patching, configuration, updating and service maintenance. The system uses a simple DSL to enable code-based configuration without the need for traditional run books or shell scripts, all while providing a high degree of customization across hardware, data center and even end-to-end environment profiles.
BMC Release Lifecycle Management (RLM) offers end-to-end visibility across the product lifecycle to integrate apps, configurations and environments. It uses a web-based portal to eliminate resource contention and streamline processes, while also providing automated continuous integration and delivery through customized templates.
ElectricFlow provides adaptive release automation so that any app can be deployed into any environment at any scale. The system uses a combination of orchestration, release management, reporting and other functions to provide centralized control of all DevOps tools and processes.
XebiaLabs DevOps Platform unites tools under a single user interface to orchestrate the entire software delivery process. The system provides visibility into release pipelines, deployment automation, control enforcement and a range of other functions. It also provides an intuitive interface for non-technical users.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.