
    Building Scale into the Private Cloud

    If all things were equal between the private and public cloud, few enterprises would migrate their workloads to public infrastructure. All things are not equal, however, so IT executives are constantly weighing the security and availability concerns of the public cloud against the higher capital costs and limited scale on the private side.

    But while public providers have made a lot of noise touting their improved encryption and service reliability, an equally strong movement is brewing to make private cloud infrastructure more scalable, easier to deploy and less expensive.

    The private cloud requires private infrastructure, of course, so deploying resources at scale remains a key challenge. (Yes, hosted is an option, too, but I’m talking about true in-house private clouds.) This is why emerging platform providers like Tintri are pushing the envelope when it comes to deploying hefty resource architectures without crushing the budget. The company’s new VMstore T5000 All-Flash Series appliance supports upwards of 160,000 virtual machines and can be outfitted with SaaS-based predictive analytics and other tools to enable advanced capacity and performance models suited to Apache Spark, Elasticsearch and other Big Data engines. And as is the company’s modus operandi, the system scales at the VM level rather than the LUN level to enable greater flexibility when matching resources to workloads.

    Emerging hyperconverged infrastructure (HCI) platforms from HP, Dell and others should also provide a solid foundation for scale-out private clouds, says Diginomica’s Kurt Marko, although it’s odd that few enterprises have pursued the strategy as yet. Rather, most HCI deployments are designed as purpose-built virtualization appliances dedicated to traditional workloads. Eventually, it will become obvious that the technology’s inherent simplicity, expandability and centralized management capabilities are tailor-made for IaaS and PaaS software stacks, but it will likely take a tightly bundled HCI/cloud vendor solution to make this plain to the enterprise.

    A key challenge in developing scale-out infrastructure is storage, of course, particularly when it comes to meeting the needs of advanced search and analytics. The latest version of SwiftStack’s Object Storage platform, 4.0, aims to meet these needs in ways that file-and-block storage cannot. The platform provides universal access to scale-out storage architectures, while at the same time enabling full synchronization with Amazon’s S3 service over a distributed footprint. In addition, it offers integrated load balancing to reduce latency and management overhead, as well as a new suite of capacity planning and data migration tools. At the same time, it provides a file-based front end for easy integration into legacy environments.

    But because hardware costs remain the key burden in infrastructure deployment, commodity solutions will likely be the best way to support a private cloud at reasonable scale. The key is to layer it with an integrated system that can tie disparate resources together. This is what platforms like Stratoscale’s Symphony are attempting to do in support of full software-defined data center (SDDC) architectures. The system is hardware-agnostic, enabling out-of-the-box OpenStack installations that provide cloud-scale economics to data centers of any size without extensive in-house expertise on the part of the enterprise.

    Even the largest private cloud will not meet the scale of Amazon or Microsoft at this point, so the goal is not to supplant public services with private ones. A scale-out private cloud is intended to provide the same ease of use and flexibility for internal applications that users can get from public sources. In this way, issues like shadow IT and loss of data interoperability are diminished because users can always turn to their employer for cloud services first. As long as effective governance and data management are maintained, most organizations should find that the limited scale of private infrastructure is more than adequate for workloads that can’t be sent beyond the firewall.

    Ultimately, the scale of the private cloud comes down to a matter of what is necessary, not what is achievable.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.

