
    Optimizing the Cloud-Based Workload

    The enterprise has gained enough experience on the cloud by now to realize that lower costs and more flexible resource allocation are not the only elements in a workload-optimized infrastructure. In fact, without careful coordination, cloud environments can introduce all manner of bottlenecks that inhibit the smooth flow of data.

    But just as in the traditional data center, optimizing workloads in the cloud is no easy task, and as more applications and services become integrated to enable new user experiences, finding the sweet spot between cost and performance becomes even more of a challenge.

    According to Jeff Loeb, chief marketing officer for IT management firm Ipswitch, optimal performance across hybrid infrastructure is particularly difficult to achieve due to the plethora of systems and technologies involved. This is why a proper deployment shouldn’t evolve haphazardly but should be built around four key management functions: infrastructure monitoring, automated log collection, flow analysis and regular network testing. Optimal performance, of course, requires more than just throughput and connectivity. Systems also need to be engineered with security, availability, compliance, and a host of other factors in mind.
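    The first of those functions, infrastructure monitoring, can be as simple as periodically polling service health endpoints and recording latency. The sketch below is a minimal illustration of that idea; the endpoint URLs are hypothetical, and a real deployment would pull them from a monitoring platform's configuration rather than a hard-coded list.

```python
import time
import urllib.error
import urllib.request

# Hypothetical endpoints to poll; substitute the health-check URLs
# your own services actually expose.
ENDPOINTS = [
    "https://app.example.com/health",
    "https://api.example.com/health",
]

def check_endpoint(url, timeout=5):
    """Return (reachable, latency_seconds) for a single endpoint."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        ok = False
    return ok, time.monotonic() - start

def poll_once(endpoints):
    """One monitoring pass: status and latency for each endpoint."""
    return {url: check_endpoint(url) for url in endpoints}
```

    The same loop, fed into log collection and flow analysis tools, is what turns raw reachability data into the trend lines that make bottlenecks visible.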

    Increasingly, however, developers are coming to the realization that it isn’t necessarily infrastructure that produces performance degradation, but the applications themselves. Customized code in particular can be a real drag on performance, according to a recent survey from Evans Data. The firm queried more than 500 developers and found that fewer than half were writing code that was optimized for cloud infrastructure or took advantage of the parallel nature of cloud processing. Many, in fact, had to rely on their cloud service provider for guidance in boosting performance outside the data center.

    It’s always best to address performance issues before deploying onto the cloud, of course, but all too often proper preparation of the distributed environment does not extend to run-time performance, says Atchison Frazer, CMO of performance analytics developer Xangati. This is why organizations should shore up their remediation processes, perform granular analysis of potential trouble spots, and develop metrics for such crucial aspects as launch, logon, and load times, along with overall latency and operational factors like printing and screen updates. The cloud benefits the enterprise in a number of ways, but it will never get off the ground if it doesn’t produce a better user experience as well.
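    Collecting metrics like launch, logon, and load times usually comes down to timing each operation and summarizing the samples. The following is a minimal sketch of that pattern; the metric names and the decorator-based approach are illustrative, not a specific vendor's API.

```python
import statistics
import time
from collections import defaultdict

class MetricRecorder:
    """Accumulates duration samples per named operation."""

    def __init__(self):
        self.samples = defaultdict(list)

    def record(self, name, seconds):
        self.samples[name].append(seconds)

    def summary(self, name):
        data = sorted(self.samples[name])
        return {
            "count": len(data),
            "median": statistics.median(data),
            # Nearest-rank 95th percentile; fine for a sketch.
            "p95": data[int(0.95 * (len(data) - 1))],
        }

metrics = MetricRecorder()

def timed(name):
    """Decorator that records the wall-clock duration of an operation."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                metrics.record(name, time.monotonic() - start)
        return inner
    return wrap
```

    Wrapping a logon handler in `@timed("logon")`, for example, yields the median and tail figures that make "logon feels slow" an actionable report rather than an anecdote.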

    This will be difficult to do without proper measurement, says Andrew Sullivan, of Internet performance company Dyn. The cloud, in fact, is doubly challenging compared to the data center for the simple reason that you don’t have full control of infrastructure in the cloud. Measuring page loads or rendering times on a web app hosted in house, for example, can reveal problems in the database, but those same metrics on the cloud could hint at anything from a connectivity problem on the path to the cloud to a resource reconfiguration at a remote facility. Fortunately, while the cloud increases the number of variables to be measured, it also provides the resources needed to support the more complex management stack, including the advanced visualization and analytics needed to meet performance objectives in real or near-real time.
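    One way to separate "the app is slow" from "the path to the cloud is slow" is to time the network phases independently of the application response. This sketch splits reachability into DNS lookup and TCP connect phases; the host name is an assumption, and production tooling would capture far more phases (TLS handshake, time to first byte) than shown here.

```python
import socket
import time

def connection_breakdown(host, port=443, timeout=5):
    """Time the DNS and TCP-connect phases separately, so a slow page
    can be attributed to the network path rather than the app.
    Hypothetical usage: connection_breakdown("app.example.com")."""
    t0 = time.monotonic()
    addr = socket.getaddrinfo(host, port)[0][4]
    dns_s = time.monotonic() - t0

    t1 = time.monotonic()
    with socket.create_connection(addr[:2], timeout=timeout):
        pass
    connect_s = time.monotonic() - t1

    return {"dns_s": dns_s, "connect_s": connect_s}
```

    If the connect phase dominates while the app's own processing time is flat, the problem likely sits in the network or at the provider, not in the code.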

    It is also important to note that since the cloud provides such a dynamic data environment, performance measurement should be equally dynamic. As users come to expect instant gratification from their applications, underlying infrastructure needs to become more autonomous when it comes to assessing performance and taking steps to improve the user experience when it starts to fall below expectations.
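    That kind of autonomy usually takes the form of a feedback loop: compare recent measurements against a service-level target and adjust capacity accordingly. The rule below is a minimal sketch under assumed thresholds (the SLO and instance bounds are illustrative); real autoscalers add cooldowns, multiple signals, and step sizing.

```python
# Illustrative targets; tune these to your own service-level objectives.
LATENCY_SLO = 0.5    # seconds
MIN_INSTANCES = 2
MAX_INSTANCES = 20

def scaling_decision(current_instances, recent_latencies):
    """Return the desired instance count for the next interval,
    based on the 95th-percentile latency of recent samples."""
    if not recent_latencies:
        return current_instances
    data = sorted(recent_latencies)
    p95 = data[int(0.95 * (len(data) - 1))]
    if p95 > LATENCY_SLO and current_instances < MAX_INSTANCES:
        return current_instances + 1   # scale out: tail latency too high
    if p95 < 0.5 * LATENCY_SLO and current_instances > MIN_INSTANCES:
        return current_instances - 1   # scale in: comfortable headroom
    return current_instances
```

    The point is less the arithmetic than the posture: the infrastructure reacts to the user-experience signal itself, rather than waiting for an operator to notice a complaint.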

    With an increasingly techno-savvy knowledge workforce these days, tolerance for latency and non-availability is at an all-time low, as is patience when applications fail to deliver. This means IT is under the gun to not just make the cloud available, but make it better than what users are accustomed to in the data center.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.

