The cloud brings many new capabilities to the enterprise table, but one of its primary advantages over traditional infrastructure is scale.
Of course, despite the wonders of virtualization and the software-defined revolution, the fact remains that you can only scale to the extent permitted by physical systems. So for many organizations, the question isn’t, “How much can I scale?” but rather, “How much can I scale without breaking my budget?”
New generations of hardware, however, are starting to ease the burden of scale-out architectures—particularly in storage. Seagate this week unveiled the Kinetic Open Storage platform, which the company says can cut the cost of building scale-out cloud infrastructure in half. The system utilizes object-based storage and does away with the traditional tier of storage servers by enabling direct communication between applications and storage devices via a new open source API. The platform has already drawn support from a number of key enterprise vendors, such as Dell, Huawei, Supermicro and EVault, and is undergoing field tests at Yahoo.
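To make the architectural shift concrete, here is a minimal sketch of the drive-as-key/value-endpoint model that Kinetic describes, where applications address drives directly over the network instead of going through a storage-server tier. The class and method names below are illustrative assumptions, not the actual Kinetic API.

```python
class KeyValueDrive:
    """Stands in for a drive that speaks key/value directly over Ethernet,
    removing the need for an intermediate storage server. The in-memory
    dict is a placeholder for the drive's on-disk key/value store."""

    def __init__(self, address):
        self.address = address   # each drive is its own network endpoint
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def delete(self, key):
        self._store.pop(key, None)


# The application talks to drives directly -- for example, hashing each
# key to choose a drive -- with no storage-server tier in between.
drives = [KeyValueDrive(f"10.0.0.{i}") for i in range(1, 5)]

def drive_for(key):
    return drives[hash(key) % len(drives)]

drive_for(b"photo-123").put(b"photo-123", b"...jpeg bytes...")
```

The point of the sketch is the topology, not the storage engine: because each drive exposes the key/value interface itself, scaling out means adding drives to the network rather than adding and managing another layer of storage servers.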
Meanwhile, the former Convergent.IO, now relaunched as Coho Data, is out with a new storage platform that provides rapid scalability without impacting the performance of legacy infrastructure. The Coho DataStream solution consists of integrated network/storage modules that are governed by a unified software-controlled storage layer. Each module contains 40TB of storage with 180K IOPS capability and is preconfigured to be deployed into existing environments without producing the network bottlenecks that traditional storage expansions encounter. The system also provides a high degree of storage virtualization that allows workloads to be isolated within a single module.
At the same time, Tintri is out with a new line of VMstore appliances that provide broad scalability to match what is happening with server and storage architectures. The system does away with logical unit numbers (LUNs) in favor of a managed pool that can be tailored to the needs of individual virtual machines. The devices range from the 13.5TB T620 to the 33.5TB T650, but the company says that a more accurate way to describe their utility is in the number of VMs they can accommodate: 500 for the T620 and 2,000 for the T650. The system also comes with the Global Center management suite, which can oversee up to 32 arrays and provides tiering software to allocate critical data to high-speed cache.
Scale-out storage architectures aren’t necessarily derived from advanced hardware configurations, however. As DataDirect Networks’ Tom Leyden points out, switching from file or block storage to object storage will likely emerge as a prerequisite for anyone looking to build a cloud. Object storage combines the data and the metadata needed to manage it into a single object, which is key for handling unstructured Big Data workloads. With no file system hierarchy to deal with, object storage is much more amenable to the storage pool approach that virtual and cloud environments require, and new data protection schemes are quickly approaching RAID-class performance. Already, designers are talking about enterprise-class object storage systems that scale into the petabytes as a single, integrated environment.
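The core idea described above—data and metadata bundled together under a flat namespace, with no directory hierarchy—can be sketched in a few lines. All names here are hypothetical and purely illustrative; real object stores add replication, erasure coding, and distribution on top of this basic shape.

```python
import hashlib

class ObjectStore:
    """Toy object store: a flat namespace mapping object IDs to
    (data, metadata) pairs -- no directories, no file paths."""

    def __init__(self):
        self._objects = {}

    def put(self, data, metadata):
        # Content-addressed ID: the store, not a filesystem path,
        # names the object. Metadata travels with the data.
        oid = hashlib.sha256(data).hexdigest()
        self._objects[oid] = (data, dict(metadata))
        return oid

    def get(self, oid):
        data, metadata = self._objects[oid]
        return data, metadata


store = ObjectStore()
oid = store.put(b"sensor readings...",
                {"content-type": "text/plain", "replicas": 3})
data, meta = store.get(oid)
```

Because every object is self-describing and independently addressable, pooling storage across many nodes is a matter of partitioning the flat ID space—which is why the approach scales so much more naturally than a shared file system hierarchy.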
Storage has always been the laggard when it comes to advanced architectures, both in terms of speed and scale. Flash and in-memory solutions have gone a long way toward improving the speed side of the coin; now it seems the industry has finally turned its attention to scale.
If results from field trials are as favorable as the vendors report, the enterprise may find that building a scale-out private cloud is not so daunting after all.