    Kicking Storage into Hyper Drive

    If the data center truly is the dinosaur of the digital world, then it looks like it is about to enter its Jurassic period, where size suddenly leaps to a massive scale.

    So-called hyperscale infrastructure is driven by the need to handle massive data volumes at the lowest possible price point. To date, most Web-facing enterprises like Google and Facebook have chosen to build custom scale-out architectures using commodity components produced by original design manufacturers (ODMs) based largely in the Pacific Rim. Of late, however, traditional IT vendors have been upping their hyperscale game in a bid to capitalize on the Big Data requirements that are encroaching upon traditional enterprise and cloud infrastructure. And as you would expect, much of this activity is centered on storage.

    A case in point is Dell, which recently took the wraps off the DCS XA90 storage array, capable of packing 720TB into a 4U chassis. Already, the company is talking about deploying the system in its Modular Data Center (MDC) platform, a configuration that would deliver a stunning 220PB of capacity and use dual Xeon E5-2600v3 processors to perform high-speed analytics and archival functions. Tracy Davis, vice president and general manager of the Dell DCS team, says the system is large enough to hold the estimated data capacity of 90 human brains. Can anyone say The Matrix?
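
    For a sense of what that density implies, here is a quick back-of-envelope calculation using only the figures cited above; the 42U rack height is a common convention rather than a Dell MDC specification.

        # Back-of-envelope math based on the figures quoted above.
        # The 42U rack height is an illustrative assumption, not a Dell MDC spec.
        TB_PER_CHASSIS = 720        # DCS XA90: 720TB in a 4U chassis
        CHASSIS_HEIGHT_U = 4
        RACK_HEIGHT_U = 42          # assumption: standard 42U rack

        chassis_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U        # 10 chassis
        pb_per_rack = chassis_per_rack * TB_PER_CHASSIS / 1000      # ~7.2PB per rack

        target_pb = 220
        chassis_needed = target_pb * 1000 / TB_PER_CHASSIS          # ~306 chassis
        print(f"~{pb_per_rack:.1f}PB per rack, ~{chassis_needed:.0f} XA90 chassis for {target_pb}PB")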

    At the same time, Fujitsu has launched the Eternus CD10000 system, which will initially provide 56PB over 224 nodes but, according to the company, has no upper limit when it comes to scalability. The system allows new nodes to be introduced seamlessly, and subsequent generations will be designed for backward compatibility with the current version, letting enterprises scale at will. Management is handled by open source software from Red Hat, Inktank Ceph Enterprise, which provides a single view of block, object and file storage in each cluster, along with built-in fault tolerance and self-healing for added availability and data protection.
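
    Because the CD10000's management layer is built on Ceph, the unified block, object and file view ultimately comes down to one cluster exposing several access methods. The sketch below shows basic object I/O through the open source python-rados bindings; it illustrates only the generic Ceph side, not any Eternus-specific tooling, and the config path and pool name are placeholders.

        import rados  # open source Ceph bindings (python-rados)

        # Connect to a Ceph cluster; the conffile path and pool name are placeholders.
        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx('demo-pool')  # assumes the pool already exists
            try:
                # Write and read back a small object. The cluster replicates it
                # across nodes automatically, which is where the fault tolerance
                # and self-healing described above come from.
                ioctx.write_full('hello-object', b'stored once, replicated by the cluster')
                print(ioctx.read('hello-object'))
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()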

    Storage scalability is not just about capacity, however. Issues like I/O, data management and interoperability can make or break scale-out architectures. A key concern is the duplicated and mirrored data that populates most file systems, says EnterpriseTech's Timothy Prickett Morgan. In a hyperscale environment, these scraps of data quickly turn into mountains that drive up access times, latency and overall costs. This is why leading hyperscale providers like Amazon rely on object-based storage, even though that choice complicates the use of services like S3 for enterprises hoping to extend legacy file systems into hybrid cloud architectures. Then again, it is also why companies like DataDirect Networks offer on-premises object-based storage platforms like the Web Object Scaler with built-in support for the S3 protocol.
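
    For enterprises weighing the two models, the practical difference shows up in the access pattern: objects are addressed by key over an HTTP API rather than through a mounted file hierarchy. The sketch below uses the open source boto3 library against a generic S3-compatible endpoint, the kind of interface platforms such as the Web Object Scaler advertise; the endpoint URL, bucket name and credentials are placeholders.

        import boto3  # open source AWS SDK for Python; works with S3-compatible endpoints

        # The endpoint, bucket and credentials below are placeholders.
        s3 = boto3.client(
            's3',
            endpoint_url='https://objects.example.com',
            aws_access_key_id='ACCESS_KEY',
            aws_secret_access_key='SECRET_KEY',
        )

        # Objects are flat key/value pairs -- no directory tree to walk or mirror,
        # which is what keeps metadata overhead manageable at hyperscale.
        s3.put_object(Bucket='archive', Key='2014/q4/report.csv', Body=b'col1,col2\n1,2\n')
        obj = s3.get_object(Bucket='archive', Key='2014/q4/report.csv')
        print(obj['Body'].read())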

    It would also be nice if hyperscale storage architecture could do away with all of the networking hardware and software that exists in conventional infrastructure. Some organizations are moving in this direction with modular systems connected via the PCIe bus or even direct memory access (DMA). Seagate, however, is taking a different tack with its Kinetic hard disk drive, which pairs standard Ethernet with an open source object API so that drives can be addressed directly, without going through file systems, storage servers and other intermediaries. The company says the approach can cut TCO in half and increase density to 800 drives in a 40U rack. It also allows server and storage resources to be scaled independently, avoiding the need to overprovision one just to raise the performance of the other.
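
    The architectural shift here is that each drive becomes a network endpoint speaking a key/value protocol, so an application (or a thin library) can place data on drives directly instead of routing every request through a storage server. The class below is a hypothetical in-memory stand-in used purely to illustrate that access pattern; it is not Seagate's open source Kinetic client, and the addresses and key layout are invented.

        # Conceptual stand-in for a Kinetic-style drive: an Ethernet-attached
        # endpoint that answers get/put requests on keys, with no file system or
        # storage server in between. Hypothetical mock, not Seagate's client library.
        class KineticStyleDrive:
            def __init__(self, address):
                self.address = address      # a real drive would be reached over Ethernet
                self._store = {}            # a real drive would persist to its platters

            def put(self, key, value):
                self._store[key] = value

            def get(self, key):
                return self._store[key]

        # Spread keys across drives directly, letting storage scale independently
        # of any server tier; placement here is a simple hash of the key.
        drives = [KineticStyleDrive(f"10.0.0.{i}") for i in range(1, 5)]
        key = b"sensor/2014/11/07/frame-000123"
        target = drives[hash(key) % len(drives)]
        target.put(key, b"raw payload bytes")
        print(target.get(key), "from drive at", target.address)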

    The image of giant data centers stuffed to bursting with modular, commodity hardware is no longer the stuff of science fiction, given the way virtualization and advanced networking have largely eliminated the need for centralized data resources to sit in close proximity to endpoints. What is still unclear, however, is whether enterprises, particularly large ones, will pursue their own hyperscale infrastructure or be content with third-party resources.

    Immense scale can go a long way toward lowering costs, but it does little to address other crucial concerns like security and availability. And even today, ensuring consistent, reliable performance still trumps a lower operating budget.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.
