Multitenant cloud computing has been enabled by compute virtualization. Most cloud architectures depend on a shared-nothing, scale-out design, which may not meet the needs of some tier-one and tier-two applications. Business-critical workloads often involve large data sets and require high bandwidth, low latency, snapshots and other enterprise storage features. As a result, partitioning such workloads into cloud-sized chunks can be problematic.
Even when workloads can be partitioned, data scalability, performance and availability vary by cloud provider. AWS and Azure, for example, allow block storage volumes of up to a terabyte. Azure offers a 99.9 percent availability guarantee for storage, which may fall below the SLAs required for tier-one workloads. An AWS high-performance volume can deliver up to 4,000 IOPS, adequate for many tasks but not for all tier-one workloads. In addition, the tools used to manage storage on cloud platforms may differ from on-premises tools, presenting an obstacle to seamless hybrid workload deployments.
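As a rough illustration (the SLA figures are from the text above; the calculation itself is not from the source), the practical meaning of a 99.9 percent availability guarantee can be seen by converting it into allowed downtime per year:

```python
# Rough sketch: annual downtime implied by an availability SLA.
HOURS_PER_YEAR = 365.25 * 24  # average year, including leap years

def max_downtime_hours(availability_pct: float) -> float:
    """Maximum downtime per year (in hours) permitted by a given availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

print(f"99.9%  availability -> {max_downtime_hours(99.9):.2f} hours/year of allowed downtime")
print(f"99.99% availability -> {max_downtime_hours(99.99):.2f} hours/year of allowed downtime")
```

A 99.9 percent guarantee permits nearly nine hours of downtime per year, which explains why it may sit below the SLAs that tier-one workloads demand.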
Data management challenges have significantly slowed enterprise adoption of cloud computing: relatively few enterprises run core workloads in the cloud outside of a handful of SaaS use cases such as email or customer account management. There are several reasons for this, but meeting the data management requirements of business-critical applications remains a primary challenge on many cloud platforms.
Software-defined storage (SDS) can help meet the performance, scalability and availability requirements of these applications. In this slideshow, software-defined storage specialists from Sanbolic take a closer look at the data management capabilities that are driving a new era of cloud computing.