    The Hybrid Dilemma: Where to Put Your Applications

    Following up on last week’s discussion of building a successful hybrid cloud, it seems the true measure of success lies not in the technology but in the functionality. Once all the pieces are in place, how is the enterprise to leverage this new computing entity to its full potential?

    The ideal, of course, is to treat both local and distributed infrastructure as a single entity, offering seamless access to data and services no matter where they reside. But the fact remains that the private and public components of the cloud are different creatures that bring unique capabilities to the table. So at some point, organizations will have to determine for themselves what services should exist over the wide area and what should stay at home.

    According to Panzura’s Barry Phillips, most data center applications should run reasonably well in the cloud, and vice versa. However, some key functions will make the transition better than others; these include anti-virus scanning, rendering, simulation and search/indexing. More critical functions that handle financial, medical and other types of privileged information should probably stay behind the corporate firewall, and under regulatory and compliance requirements, in many cases they will have to. To gain the most flexibility from a hybrid environment, the enterprise should retool as many apps as possible so they can function seamlessly across internal and external infrastructure.
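
    To make that placement rule concrete, it can be expressed as a small decision function. The sketch below is purely illustrative and not drawn from Panzura or any other vendor tool; the Workload fields and the rules themselves are assumptions that a real policy engine would replace with its own criteria.

        from dataclasses import dataclass

        @dataclass
        class Workload:
            name: str
            handles_regulated_data: bool  # e.g., financial or medical records
            is_batch_compute: bool        # e.g., scanning, rendering, simulation, indexing
            latency_sensitive: bool

        def place(workload: Workload) -> str:
            """Return 'private' or 'public' placement for a workload (toy rules)."""
            if workload.handles_regulated_data:
                return "private"  # compliance: stay behind the corporate firewall
            if workload.is_batch_compute and not workload.latency_sensitive:
                return "public"   # bursty batch compute is a natural cloud candidate
            return "private" if workload.latency_sensitive else "public"

        if __name__ == "__main__":
            print(place(Workload("antivirus-scan", False, True, False)))  # -> public
            print(place(Workload("patient-records", True, False, True)))  # -> private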

    A typical hybrid model will incorporate a number of technologies and methodologies, says BizTech Magazine’s John Edwards, including virtualized and cloud-based infrastructure in the data center, plus various hosted/colocated resources and SaaS/PaaS/IaaS layers elsewhere. Since one of the key benefits of going hybrid is managing traffic in highly scaled environments, a robust application delivery mechanism is a must. In this way, organizations can support managed apps, virtual desktops and even entire virtual workspaces with rapid, self-service provisioning and broad collaborative capabilities, especially during “boot storm” periods like the beginning of the workday, when multiple users are logging on at the same time.
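
    The “boot storm” scenario also lends itself to a brief sketch. Assuming a simple threshold model (the sessions-per-host figure and headroom factor below are invented for illustration, not taken from any product), a hybrid autoscaler might size desktop-session capacity like this:

        import math

        def desired_hosts(active_sessions: int,
                          sessions_per_host: int = 50,
                          headroom: float = 0.2) -> int:
            """Hosts needed for current sessions plus a login-surge buffer."""
            return max(1, math.ceil(active_sessions * (1 + headroom) / sessions_per_host))

        def rebalance(current_hosts: int, active_sessions: int) -> int:
            """Scale out into cloud capacity during a spike; scale back in afterward."""
            target = desired_hosts(active_sessions)
            if target != current_hosts:
                verb = "scale out" if target > current_hosts else "scale in"
                print(f"{verb}: {current_hosts} -> {target} hosts")
            return target

        if __name__ == "__main__":
            hosts = 2
            for sessions in (80, 400, 900, 250):  # simulated morning login curve
                hosts = rebalance(hosts, sessions)

    The design choice that matters here is the headroom factor: provisioning only for the current load guarantees queuing during the login spike, while a modest buffer lets capacity run ahead of demand.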

    As mentioned earlier, though, integration between on-premises and distributed data environments is key, which is why data center operating system (DCOS) developers like Mesosphere are warming up to open source. The company recently released the DC/OS version of its software with the backing of Cisco, Microsoft, HPE and other leading platform providers. The aim is to provide single-click, app-store-style provisioning of advanced, distributed data applications like Hadoop and other Apache projects across multivendor hybrid infrastructures. This will present the hybrid cloud as a single compute environment capable of supporting containers, microservices, Big Data applications and virtually anything else the enterprise needs to support its business model. In essence, it places the average organization on the same operational footing as Amazon, Google and other hyperscale entities.
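
    As a rough illustration of that app-store model, DC/OS ships a command-line client whose package installer can be scripted. The sketch below is an assumption-laden example, not Mesosphere’s documented workflow: it presumes the dcos CLI is installed and already attached to a cluster, and the package names are only examples.

        import subprocess

        def install_service(package: str) -> None:
            """Install a DC/OS service non-interactively; --yes skips the confirm prompt."""
            subprocess.run(["dcos", "package", "install", package, "--yes"], check=True)

        if __name__ == "__main__":
            for pkg in ("spark", "cassandra"):  # example package names
                install_service(pkg)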

    Indeed, many of the scale-out applications required by the emerging digital, service-based economy are not feasible without a tightly integrated hybrid cloud. Constructs like the data warehouse and the data lake, for example, require broad scalability and direct access to critical data if they are to perform as needed in the new data ecosystem. Teradata recently furthered this goal by adding several key capabilities to its Hybrid Cloud to enable synchronized data query and analysis for Big Data and IoT applications. The system now supports the company’s IntelliFlex MPP (massively parallel processing) architecture and Amazon Web Services, plus a broad range of professional services for critical functions like migration, database conversion and systems management.

    The hybrid cloud is all about uniting disparate infrastructure under a single operating architecture, but this does not mean decisions regarding where and how applications and services are supported are no longer relevant. Instead, the hybrid cloud should provide a wider range of options regarding access, scale, security and numerous other components so that no matter what the challenge, there is a way to confront it.

    Without an integrated hybrid infrastructure, organizations may end up spending untold amounts of money on the cloud only to find themselves with the same silo-based architectures they were hoping to get rid of.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
