Data management challenges have significantly slowed enterprise adoption of cloud computing; relatively few enterprises run core workloads in the cloud outside of a handful of SaaS use cases such as email or customer account management. There are several reasons for this, but meeting the data management requirements of business-critical enterprise applications remains a primary obstacle on many cloud platforms.
Software-defined storage (SDS) can help meet the performance, scalability and availability requirements of these applications. In this slideshow, software-defined storage specialists from Sanbolic take a closer look at the data management elements that are driving a new era of cloud computing.
Business-critical workloads and cloud computing
Compute virtualization has enabled multitenant cloud computing, and most cloud architectures depend on a shared-nothing, scale-out design that may not meet the needs of some tier-one and tier-two applications. Business-critical workloads often involve large data sets and require high bandwidth, low latency, snapshots or other data services, which makes them difficult to partition into cloud-sized chunks.
Data scalability, performance and availability also vary by cloud provider. AWS and Azure, for example, allow block storage volumes of up to a terabyte. Azure offers a 99.9 percent availability guarantee for storage, which may fall below the SLAs of tier-one workloads, and an AWS high-performance volume can deliver up to 4,000 IOPS, adequate for many tasks but not all tier-one workloads. In addition, the tools used to manage storage on cloud platforms may differ from on-premise tools, an obstacle to seamless hybrid workload deployments.
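To put these numbers in perspective, a quick back-of-the-envelope check shows how far a 99.9 percent storage guarantee sits from a typical tier-one target. In the Python sketch below, the 99.9 percent figure comes from the text; the 99.99 percent tier-one target is an illustrative assumption, not any provider's published number.

```python
# Back-of-the-envelope SLA check. The 99.9 percent cloud figure comes
# from the text; the 99.99 percent tier-one target is an assumption
# used only for comparison.

HOURS_PER_YEAR = 24 * 365

def allowed_downtime_hours(availability_pct: float) -> float:
    """Maximum yearly downtime permitted by an availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

print(f"99.9% cloud storage SLA: {allowed_downtime_hours(99.9):.2f} hours/year")
print(f"99.99% tier-one target:  {allowed_downtime_hours(99.99):.2f} hours/year")
# 99.9% cloud storage SLA: 8.76 hours/year
# 99.99% tier-one target:  0.88 hours/year
```

An order of magnitude separates the two, which is why a 99.9 percent guarantee can fall short of tier-one expectations.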
Data management platform requirements
Cloud storage often refers to key-value object storage, which has the advantage of being highly scalable, simple and inexpensive. Many use cases, such as image storage or archiving, can use object storage effectively, but tier-one and tier-two enterprise workloads typically require frequent access to and updates of stored data, and are designed around block or file storage. Features such as mirroring for availability, snapshots and cloning, and quality of service (QoS) management are standard for on-premise storage and are important considerations when moving these workloads onto cloud-based resources.
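The access-pattern difference matters in practice: updating part of an object typically means rewriting the whole object, while block storage allows an in-place write. The following sketch is a simplified illustration, not a real storage SDK; a dictionary stands in for an object store and a file handle for a block device.

```python
# Simplified contrast between object-style and block-style updates.
# A dict stands in for an object store; a file stands in for a block
# device. Not a real storage API.

def object_update(store: dict, key: str, offset: int, data: bytes) -> None:
    """Read-modify-write: the whole object is fetched and re-stored."""
    obj = bytearray(store[key])            # GET the entire object
    obj[offset:offset + len(data)] = data  # patch the bytes in memory
    store[key] = bytes(obj)                # PUT the entire object back

def block_update(device, offset: int, data: bytes) -> None:
    """In-place write: only the affected bytes are touched."""
    device.seek(offset)
    device.write(data)

store = {"records.dat": b"\x00" * 1024}
object_update(store, "records.dat", 100, b"new")  # rewrote 1 KB to change 3 bytes
```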
Moving data management into the software layer
Software-defined storage (SDS) can abstract physical, virtual or cloud storage and move storage services into a software layer that runs on standard x86 server instances, either on-premise or in the cloud. SDS enables the creation of “server SANs” using the local storage of cloud server instances, aggregating performance and spanning “availability zones” to protect against outages, as sketched below. It can also provide scale-out beyond the feature and volume-size limitations of cloud-based block storage offerings.
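As a rough illustration of the server SAN idea, the sketch below pools the local disks of four hypothetical cloud instances, striping within each availability zone for aggregate capacity and throughput, and mirroring across zones for outage protection. The instance names, zones and sizes are made up for the example.

```python
# Conceptual server SAN capacity model: stripe local disks within a
# zone (capacity and throughput add up), mirror across zones (usable
# capacity is the smaller zone's stripe). All names are hypothetical.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LocalDisk:
    instance: str
    zone: str
    size_gb: int

disks = [
    LocalDisk("vm-1", "zone-a", 500), LocalDisk("vm-2", "zone-a", 500),
    LocalDisk("vm-3", "zone-b", 500), LocalDisk("vm-4", "zone-b", 500),
]

stripe_gb = defaultdict(int)
for d in disks:
    stripe_gb[d.zone] += d.size_gb      # striping aggregates capacity

usable_gb = min(stripe_gb.values())     # cross-zone mirror keeps one full copy per zone
print(dict(stripe_gb))                  # {'zone-a': 1000, 'zone-b': 1000}
print(f"Usable mirrored capacity: {usable_gb} GB")
```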
Key software-defined storage features
SDS enables data to be aggregated and managed centrally. Enterprises can now use SDS platforms to deliver storage services such as various RAID levels, snapshots, cloning, QoS and replication consistently across all storage types.
With SDS, the requirements of tier-one and tier-two workloads can be met while a common storage management platform spans on-premise and cloud infrastructure. Furthermore, OpenStack integration facilitates deployment on many cloud platforms, as the example below illustrates.
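As a concrete example of what OpenStack integration looks like from the consumer side, this minimal sketch provisions an SDS-backed volume through the Cinder block storage API using python-cinderclient. The endpoint, credentials and the 'sds-tier1' volume type (which an operator would map to the SDS platform's Cinder driver) are all hypothetical.

```python
# Minimal sketch: creating a volume through OpenStack Cinder with
# python-cinderclient. Credentials, endpoint and the 'sds-tier1'
# volume type are placeholders, not real values.

from cinderclient.v2 import client

cinder = client.Client("admin", "secret", "demo-project",
                       "http://controller:5000/v2.0")

# Request a 100 GB volume from the backend registered as 'sds-tier1'.
vol = cinder.volumes.create(size=100,
                            name="tier1-db-data",
                            volume_type="sds-tier1")
print(vol.id, vol.status)
```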
Hybrid management spanning on-premise and cloud
Keep in mind that hybrid or disaster recovery deployments require a data set to span on-premise and cloud storage resources. Moving a large data set to a cloud platform can still be time-consuming and is limited by the bandwidth of the network link, which is why many cloud providers let customers physically ship data to their facilities for the initial migration. Hybrid on-premise/cloud workloads then require data to be kept in sync and managed across both locations; true software-defined storage platforms deliver native mirroring and replication, and also support active-active application deployments across multiple data centers, if network latency permits.
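The bandwidth constraint is easy to quantify. As a rough illustration, the 50 TB data set and 100 Mbps link below are assumptions, and the 70 percent efficiency factor is a rule of thumb for protocol overhead:

```python
# Why initial migrations are often shipped on physical media: WAN
# transfer time for a large data set. Data-set size, link speed and
# the 70% efficiency factor are illustrative assumptions.

def transfer_days(dataset_tb: float, link_mbps: float,
                  efficiency: float = 0.7) -> float:
    """Days to move dataset_tb over a link_mbps link."""
    bits = dataset_tb * 8e12                       # TB -> bits (decimal)
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86400

print(f"{transfer_days(50, 100):.0f} days")        # 50 TB at 100 Mbps -> ~66 days
```

At roughly two months for the initial copy, shipping disks and then replicating only the ongoing changes is often the more practical path.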
Because SDS provides a common data platform both on-premise and in the cloud, there are no compatibility issues between on-premise and cloud-based storage resources and management tools.
Fluid IT resources
Business-critical workloads that use large, dynamic data sets and require low-latency data access have largely remained tethered to silos created by hardware-defined storage. Software-defined storage eliminates these silos on-premise and makes it easier to migrate these applications to the cloud when economics and business strategy dictate. Compute virtualization, together with software-defined storage, provides a fluid IT infrastructure in which workloads can be deployed flexibly across heterogeneous resources, which increasingly include the cloud. SDS platforms are a key tool for achieving this objective.
The SDS evolution
Moving storage services into a software layer that can run on standard server instances, both on-premise and in the cloud, eliminates silos and better supports the scale and speed today’s business environment demands. As forward-thinking enterprises increasingly tap SDS solutions to span workloads across data centers as well as public cloud resources, they will be able to manage data centrally and deliver the performance, data protection and SLA management that business-critical applications require.