
    Confronting the Hyperscale Storage Challenge

    Enterprises looking to scale out their own data infrastructure quickly run into a problem: While more computing and networking capacity can be squeezed out of existing hardware, storage cannot.

    Sure, you can make improvements to I/O, IOPS, management capabilities and the like, and even defrag drives and deduplicate data, but when it comes to storing a bit of data on a piece of magnetized metal or a solid state cell, the only way to increase capacity is to deploy more hardware.

    So regardless of whether the goal is to build an internal cloud or a hyperscale analytics engine, the only solution to a lack of storage is to, well, add more storage. Fortunately, there are ways to do this without pushing the infrastructure footprint over the edge or breaking the capital or operating budgets.

    One of the newest innovations comes from Facebook’s Open Compute Project, which recently introduced the “Bryce Canyon” storage design that increases the density of hard disks in standard rack systems. It does this by placing the drives upright in the chassis rather than laying them flat in the usual fashion, which lets the enterprise cram 72 drives into the same space that the previous architecture, “Honey Badger,” used for 60. While on the surface this would appear to increase the power and heat load in the rack as well, Facebook engineer Eran Tal explained to Data Center Frontier that the design also allows for better airflow from underneath the chassis by using larger fans.
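
    As a rough sketch of what that density gain means in practice, the arithmetic below compares the two chassis designs. The 10 TB per-drive capacity is an assumed figure used purely for illustration, not a number cited by Facebook:

        # Back-of-the-envelope density comparison between the two OCP storage designs.
        # The 10 TB per-drive figure is an assumption for illustration, not a spec.
        HONEY_BADGER_DRIVES = 60
        BRYCE_CANYON_DRIVES = 72
        DRIVE_CAPACITY_TB = 10  # assumed nearline HDD capacity

        density_gain = (BRYCE_CANYON_DRIVES - HONEY_BADGER_DRIVES) / HONEY_BADGER_DRIVES
        added_capacity_tb = (BRYCE_CANYON_DRIVES - HONEY_BADGER_DRIVES) * DRIVE_CAPACITY_TB

        print(f"Drive count increase: {density_gain:.0%}")                # 20%
        print(f"Extra raw capacity per chassis: {added_capacity_tb} TB")  # 120 TB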

    Solid state storage is also an option in scale-out architectures, and price cuts on many solutions are starting to bring them to cost parity with hard disk over the full lifecycle of the environment. Kingston Digital’s new Data Center PCIe 1000 (DCP1000) device, for example, tops out at 3.2 TB and allows organizations to quickly add capacity to legacy server and storage infrastructure without a forklift upgrade. The device features four 8-channel controllers to deliver 1.25 million IOPS from a single drive, which makes it suitable for high-performance applications such as database optimization, OLTP and desktop virtualization. Hyperscale developer Liqid Inc. has already incorporated the device into its Composable Infrastructure solution.
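
    The lifecycle-parity argument is essentially an acquisition-plus-operating-cost calculation. The sketch below shows the shape of that comparison only; every price and wattage in it is an illustrative assumption rather than Kingston or hard disk vendor pricing, and a real evaluation would also weigh density, failure rates and performance:

        # Illustrative lifecycle cost per TB: purchase price plus energy over service life.
        # All figures are assumptions for the example, not vendor numbers.
        YEARS = 5
        POWER_COST_PER_KWH = 0.12   # assumed data center energy rate, USD
        HOURS_PER_YEAR = 24 * 365

        def lifecycle_cost_per_tb(price_per_tb, watts_per_tb):
            """Acquisition price plus energy cost over the assumed service life."""
            energy_kwh = watts_per_tb * HOURS_PER_YEAR * YEARS / 1000
            return price_per_tb + energy_kwh * POWER_COST_PER_KWH

        hdd_cost = lifecycle_cost_per_tb(price_per_tb=30, watts_per_tb=1.0)  # assumed
        ssd_cost = lifecycle_cost_per_tb(price_per_tb=80, watts_per_tb=0.4)  # assumed
        print(f"HDD: ${hdd_cost:.2f} per TB over {YEARS} years")
        print(f"SSD: ${ssd_cost:.2f} per TB over {YEARS} years")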

    Even greater capacities can be found in solid state solutions aimed at long-term and archival platforms. NGD Systems recently released the Catalina SSD, which uses 3D TLC flash to provide up to 24 TB in a PCIe edge card form factor. Although it provides high I/O like any SSD, the device is aimed at cold storage applications due to its ultra-low power requirement of less than 0.65 watts per TB, enabled by the company’s patented Elastic Flash Translation Layer (FTL) algorithm. This makes the device well suited to read-intensive applications like content delivery networks (CDNs), media servers and other rich-media workloads.
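
    Taking those figures at face value, the power rating implies a very modest draw even for a fully populated card. A quick calculation, assuming the worst-case 0.65 watts per TB applies across all 24 TB:

        # Rough worst-case power estimate for a fully populated Catalina card,
        # using the <0.65 W/TB figure cited above.
        CAPACITY_TB = 24
        WATTS_PER_TB = 0.65

        card_power_w = CAPACITY_TB * WATTS_PER_TB
        print(f"Approximate worst-case draw per 24 TB card: {card_power_w:.1f} W")  # ~15.6 W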

    In this age of the cloud, however, organizations don’t even need to scale out their own internal storage when it is much easier to use someone else’s. Software-defined storage developer OSNEXUS recently launched a Capacity as a Service license program that allows organizations to dynamically manage and provision high-performance storage for a wide range of applications. The program uses the QuantaStor Cloud License Service portal to automate the installation and provisioning of software-defined storage for enterprises, higher education providers and other users, and it features dynamic self-service provisioning, capacity quota management and internal chargeback capabilities.

    At the end of the day, the worldwide data industry will have to provision more storage in order to support the loads being generated by the Internet of Things and the digital economy in general. But the days of simply expanding the array and over-provisioning capacity to the nth degree are over.

    Going forward, the enterprise will have to make sure that it not only has enough storage under management, but that it can dynamically scale along with the workload.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.

