Systems Management in a Distributed World

Arthur Cole

Arthur Cole spoke with Richard Threlkeld, technical product manager, 1E.

As data environments become more distributed, the challenge of systems management grows exponentially. Where once an enterprise need concern itself with a finite set of hardware and software components, today's organizations must coordinate activity across a range of platforms, architectures and infrastructures on public, private and hybrid clouds.

Companies like 1E say this state of affairs presents many challenges, but there are opportunities as well. As the company's Richard Threlkeld puts it, with proper systems management, "software license costs are slashed, IT staff costs eliminated and network content headaches are gone."

Cole: IT systems management is already a headache within the data center. What are some of the challenges now that data environments are more dynamic and more dispersed?

Threlkeld: Traditionally, systems management in the data center has had two main problems: a lack of heterogeneous platform support and no capability to segregate client systems from server-class platforms within a single systems management platform. The first problem has been addressed to a degree by major players such as Microsoft, HP and IBM through broader cross-platform support, and when combined with third-party add-ons, it is possible to get complete functional coverage for the majority of systems in your data center.

Server and client segregation within a systems management platform has also come a long way. In years past, if an organization wished to use System Center Configuration Manager or its predecessor SMS to manage both the desktop estate and the data center, it might set up two separate infrastructures for settings management as well as software distribution targeting and operating system deployment. This would not only allow the platforms to have different configurations, but also prevent client distributions from accidentally hitting servers in the data center or vice versa.

The newer versions of ConfigMgr allow for separation of client settings and configuration management based on platform profiles and have intelligence built into the applications being deployed to not only allow for clear lines between the client estate and data center, but also help prevent human error causing faulty targeting during software deployments and problems in the data center.

These newer platforms dovetail nicely as client systems become more mobile and data centers spread out geographically. Rules evaluation based on the aforementioned built-in application intelligence reduces the computing that must take place in the data center, because the systems management platform moves these processes to the client side. Essentially, it is faster to calculate what one system, server or client, needs to do on that system itself than to have a handful of servers perform the same calculation for every system and platform in the enterprise. When you combine this with intelligent bandwidth throttling and peer distribution mechanisms, the need for more servers and large network pipes is dramatically reduced. This means server provisioning at different data centers can be more targeted and business-focused. Settings management can also be targeted at these different locations to guarantee that all newly provisioned systems get consistent profiling and IT management with less administration and, ultimately, a smaller IT headcount.
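To make the client-side evaluation idea concrete, here is a minimal sketch of what "each system decides for itself whether a deployment applies" might look like. The rule format, function names and example deployments are hypothetical illustrations, not the actual ConfigMgr rule model:

```python
# Hypothetical sketch: each client evaluates deployment applicability
# rules locally, instead of a central server computing targeting for
# every machine in the enterprise.

def is_applicable(rule: dict, system: dict) -> bool:
    """Return True if every condition in the rule matches this system."""
    return all(system.get(key) == value for key, value in rule.items())

def evaluate_deployments(system: dict, deployments: list) -> list:
    """Select the deployments whose rules this system satisfies."""
    return [d["name"] for d in deployments if is_applicable(d["rule"], system)]

# Example: a client laptop sees only client-targeted deployments, so a
# server patch can never accidentally hit it (and vice versa).
deployments = [
    {"name": "SQL Server patch", "rule": {"role": "server"}},
    {"name": "Office update",    "rule": {"role": "client"}},
]
laptop = {"role": "client", "site": "Berlin"}
print(evaluate_deployments(laptop, deployments))  # ['Office update']
```

Each client runs this check against its own properties, so the central servers never have to enumerate every machine to work out targeting.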

Cole: Does the cloud add a new layer of complexity to this picture? How are enterprises expected to manage resources that are not technically their own?


Threlkeld: Adding anything from a technology standpoint to your environment will always add some level of complexity. The question is, will you be committed enough to add a cloud layer to your IT stack and remove other existing processes and technology so that the net effect is an overall reduction in technology? This is a similar question to purchasing a new piece of software to replace another. Will you remove all remnants of that older software or will there be some stragglers and edge cases left over that still need management? This always needs to be considered when introducing any new technology in an organization.

When done properly, however, adding a cloud layer to many of your services will pay dividends to your organization. First, it is important to remember what a cloud effectively is: a data center, run by someone else, reached over a different type of network link, the Internet. Today you have your own private data centers for many services, with a corresponding overhead for the continued maintenance, servicing and replacement of those servers and software. If you keep the administrative costs but allow the cloud provider to handle the maintenance and service, you reduce your overall cost. Your organization is no longer expected to manage these systems; that is the provider's responsibility. You will also effectively get a higher level of service, thanks to the specialized management expertise the cloud provider has built up in that vertical. When you keep services in house, your staff can only handle new situations based on its own experience. The cloud provider, however, operates systems at a much larger scale and therefore has a larger pool of experience and expertise, so if issues arise on your managed systems they will be handled faster and likely better.

Of course, there are still concerns with pushing certain resources to the cloud, chiefly the security of the systems and the data. If this is handled correctly, however, there are more benefits to the cloud beyond the services we've just discussed, and potentially more cost reduction opportunities. For instance, in the systems management domain, you can push your software content distribution processes out of your data centers and into the cloud to get around the data replication issues that may exist on your internal links today. Many organizations are still hampered by the physics of bandwidth over the WAN links between data centers in different locations; this is especially true for global organizations. With cloud services, you can cut out the middleman, distribute securely over different Internet or satellite links, and eliminate extra charges from network providers.

Cole: How does 1E manage tasks like OS migration and content delivery now that we've moved past simple client-server relationships?

Threlkeld: 1E has taken a strategy that is a bit different from the traditional players'. Most of the time, large software vendors or services organizations will automate 90 percent of the tasks for migrations or content distribution on 100 percent of the systems. That still means desk-side visits or help desk calls to handle the last 10 percent. It often also means that users must ship in their PCs to be migrated to new client systems, and that extra IT staff are needed to finish off tasks in the data center. Instead, 1E automates 100 percent of the tasks on 90 percent of the systems. There will always be edge cases, such as exotic hardware or executive systems, but the majority of client or server systems do not need human intervention.

1E achieves this by having products at different levels of the infrastructure and automation stack work to optimize tasks. We’ve integrated these products to speak with each other. For instance, the Nomad 2012 product handles the infrastructure component at the lowest level of OS migration and content delivery by throttling bandwidth at the client and then distributing content to peer systems or servers with automatic failover mechanisms to eliminate administrative overhead. It also provides components to automate bare metal client or server builds without network changes in addition to dynamic data protection, backup and restore.
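The peer-distribution-with-failover idea can be illustrated with a short sketch. This is not Nomad's actual protocol; the peer names, cache layout and failover order are hypothetical, shown only to make the "try local peers first, fall back to a distribution server" behavior concrete:

```python
# Hypothetical sketch of peer-sourced content delivery with failover:
# ask peers on the local subnet that may already hold the package
# before falling back to a distribution server over the WAN.

def fetch_content(package, peers, cache, server):
    """Return (source, content); prefer a LAN peer, fall back to the server."""
    for peer in peers:
        content = cache.get((peer, package))
        if content is not None:
            return peer, content              # served from the local subnet
    return server, cache[(server, package)]   # automatic WAN failover

# Simulated state: peer-a has not cached the package yet, peer-b has.
cache = {
    ("peer-a", "app.msi"): None,
    ("peer-b", "app.msi"): b"installer bytes",
    ("dp-01",  "app.msi"): b"installer bytes",
}
source, _ = fetch_content("app.msi", ["peer-a", "peer-b"], cache, "dp-01")
print(source)  # peer-b
```

Because most requests are satisfied by a nearby peer, the WAN link and the central distribution servers only carry the first copy of each package into a site.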

This process is driven by a workflow engine called Shopping, which acts as an app store for the corporate enterprise that either end users or IT staff can leverage. With the Shopping workflow interface, tasks that IT staff would normally need to complete, such as scheduling an OS migration or installation and gathering user applications or data, are eliminated by handing those processes over to the user community. This not only reduces administrative overhead through automation, but also leads to higher user satisfaction rates, because users are in charge of what is taking place on their systems. It also gives the IT staff more powerful tools for dynamic targeting and system replication builds of client or server systems, since they no longer have to worry about how content is delivered or whether the experience will be consistent.

Finally, 1E ties in a product called AppClarity, which gathers application usage information from across the enterprise and calculates usage patterns for individual systems or groups of systems. This information feeds an intelligence engine that Shopping reads from, so that when the OS migration and content distribution process takes place, only the applications and OS content that actually need to be installed and distributed are selected. When these tools are all working together in an organization, software license costs are slashed, IT staff costs eliminated and network content headaches are gone.
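The usage-driven selection step can be sketched in a few lines. The threshold, data shapes and function name below are hypothetical, not AppClarity's actual model; the point is simply that unused software is dropped from the migration set, which is where the license savings come from:

```python
# Hypothetical sketch: keep only applications with recent recorded usage,
# so a migration reinstalls (and re-licenses) only what is actually used.

NEVER_USED = 10**9  # sentinel: treat apps with no usage record as stale

def apps_to_migrate(installed, days_since_use, threshold=90):
    """Keep apps used within `threshold` days; drop shelfware."""
    return [app for app in installed
            if days_since_use.get(app, NEVER_USED) <= threshold]

installed = ["Office", "Visio", "LegacyCAD"]
usage = {"Office": 2, "Visio": 400}  # LegacyCAD has no recorded usage at all
print(apps_to_migrate(installed, usage))  # ['Office']
```

Visio and LegacyCAD are left off the new build, so their licenses can be reclaimed instead of being paid for and redeployed unused.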


