In just about every survey taken these days, enterprise executives express a strong desire to migrate more workloads to the cloud due to its broad scalability, low cost and ease of operation. At the same time, though, there are strong reservations about security, availability and performance, particularly as data loads increase.
So what we have here is a classic Catch-22: We want more cloud, but the more we consume, the less useful it becomes.
The key aspect in all this is the multitenancy that is built into most cloud architectures. This provides for a highly efficient distribution of resources, but at the same time increases the risk that data and application requirements won’t be met should traffic spike dramatically. A recent survey from Internap sums up this conundrum nicely: Overhead in multitenant environments tends to climb to unacceptable levels as data volumes grow, which makes the cloud a fickle partner at best when it comes to Big Data analytics and other key initiatives.
It’s for this reason that many organizations are seeking out more nuanced approaches to the cloud, primarily ones that combine solid performance with extensive capabilities, even if they come at the expense of operational efficiency or even low costs. A case in point is bare-metal provisioning, which places a cloud environment, such as IaaS or PaaS, on top of dedicated server, storage and network resources within the host provider’s data center. As Business Cloud News’ Jonathan Brandon points out, many database and I/O-sensitive applications don’t function well in virtual environments anyway, and in fact can be supported more effectively, and at less cost, on a bare-metal cloud platform than on a virtual, multitenant system.
This is why we’re starting to see top cloud providers like Rackspace increase their reliance on bare-metal offerings for enterprise customers. With the new OnMetal service, the company says it can provide dedicated hardware in a few minutes using the same provisioning tools that clients use to build OpenStack-based virtual infrastructure. The service is geared toward applications like web processing, RAM-based caching and database functions, with hardware built largely to the Open Compute Project (OCP) reference design released by Facebook, albeit with a few tweaks, such as an all-solid-state design and greater in-memory server capacity.
Meanwhile, IBM is leveraging its SoftLayer platform to drive greater bare-metal access in the cloud. The company recently teamed up with OpenStack developer Mirantis Inc. to enable a private OpenStack service on dedicated hardware that can be leased for about $60 per day. The idea is to provide a more stable cloud environment for predictable yet data-intensive workloads while still giving the enterprise cloud consumer extensive host management. The service can range from two to 50 servers spread across data centers in San Jose, Singapore and Amsterdam, with virtual machine support starting at a single vCPU, 1 GB of RAM and 5 GB of storage. Additional data centers are expected to come online later in the year.
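To put the cited pricing in perspective, here is a quick back-of-the-envelope calculator for the lease model described above. The $60-per-day rate and the two-to-50-server range come from the article; everything else, including the function name and the 30-day month, is an illustrative assumption:

```python
# Rough lease-cost calculator for a bare-metal service like the one described.
# The per-day rate and server range are taken from the article; the 30-day
# month and the function itself are illustrative assumptions.

RATE_PER_SERVER_PER_DAY = 60.0    # approximate cited lease rate, USD
MIN_SERVERS, MAX_SERVERS = 2, 50  # service range noted in the article

def lease_cost(servers: int, days: int = 30) -> float:
    """Total lease cost in USD for a cluster of `servers` over `days` days."""
    if not MIN_SERVERS <= servers <= MAX_SERVERS:
        raise ValueError(f"service covers {MIN_SERVERS}-{MAX_SERVERS} servers")
    return servers * days * RATE_PER_SERVER_PER_DAY
```

Under these assumptions, a minimal two-server private cloud runs about $3,600 per 30-day month, while the full 50-server tier approaches $90,000.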
Some may wonder why it makes sense to bother with bare metal in the cloud when you can simply deploy your own server in-house. The answer comes down to utilization. If you have the workload to keep a dedicated in-house server busy for the majority of its lifecycle, by all means, buy one. But if Big Data or other applicable tasks are infrequent, overall costs in the cloud will still be lower, even though bare-metal instances come at a premium compared to shared architectures.
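The utilization argument above can be made concrete with a simple break-even sketch. The lease rate echoes the roughly $60-per-day figure cited earlier; the owned-server cost and three-year lifecycle are purely illustrative assumptions, not real quotes:

```python
# Break-even sketch: buy a dedicated server vs. lease bare metal in the cloud.
# The lease rate echoes the ~$60/day figure cited earlier; the owned-server
# cost and three-year life are illustrative assumptions.

OWNED_TOTAL_COST = 12_000.0  # purchase + power, cooling, admin over 3 years (assumed)
LEASE_RATE_PER_DAY = 60.0    # bare-metal lease rate, USD/day

def cheaper_option(busy_days: int) -> str:
    """Return 'buy' or 'lease' for the given busy days over the server's life."""
    return "buy" if OWNED_TOTAL_COST < busy_days * LEASE_RATE_PER_DAY else "lease"

# Break-even point: 12,000 / 60 = 200 busy days over the server's lifecycle.
BREAK_EVEN_DAYS = OWNED_TOTAL_COST / LEASE_RATE_PER_DAY
```

Under these numbers, a server kept busy most of its life (say, 900 days out of roughly 1,100) is cheaper to own, while occasional Big Data runs totaling 150 busy days are cheaper to lease, premium and all.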
In the end, however, infrastructure decisions like these should be based on organizational and user/application needs, rather than on the technology itself.