    DAS Rules in Hyperscale, For Now

    At one time, direct attached storage (DAS) appeared to be heading the way of the dinosaur. Networked storage was ascendant because it let you manage a centralized array of disk and tape drives and distribute data to a wide variety of endpoints. Not only did this provide enormous capacity when required, it also fostered a higher level of data interactivity among users.

    Times have changed, however. With modular, hyperscale infrastructure on the rise, keeping storage close to processing resources is proving highly advantageous for scale-out database applications, while advanced network fabric technologies supply the collective capacity and data availability that modern workloads require.

    This is reflected in the variety of solutions hitting the channel aimed at DAS architectures, starting with higher-capacity disk drives. Toshiba recently announced that it had hit the 1 Tb (terabit) per square inch areal density plateau on a conventional disk drive platform, ushering in the possibility of a standard drive that tops 3 TB. The company offered few details, other than to say the drive uses a conventional perpendicular magnetic recording (PMR) design rather than the emerging shingled magnetic recording (SMR) approach, and that it expects to release the drive later this year.
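    As a rough sanity check on that capacity figure, per-platter capacity is simply areal density times recordable surface area. The platter geometry and count below are illustrative assumptions, not Toshiba specifications:

        # Back-of-the-envelope: areal density -> raw drive capacity.
        # All geometry figures are illustrative assumptions, not Toshiba specs.
        TBIT_PER_SQ_IN = 1.0          # announced density: ~1 terabit per square inch
        AREA_PER_SURFACE_SQ_IN = 4.4  # assumed recordable area of one platter surface
        SURFACES_PER_PLATTER = 2
        PLATTERS = 3                  # assumed platter count

        bits = TBIT_PER_SQ_IN * 1e12 * AREA_PER_SURFACE_SQ_IN * SURFACES_PER_PLATTER * PLATTERS
        print(f"Raw capacity: ~{bits / 8 / 1e12:.1f} TB")  # ~3.3 TB before formatting overhead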

    DAS has caught on particularly well in large Hadoop clusters where storage and retrieval of Big Data workloads from a centralized repository would be problematic at best. But as Hadoop evolves beyond simple MapReduce queries and the like to take on more complicated tasks, it is questionable whether DAS will be able to keep up, says Infostor’s Paul Rubens. For one thing, DAS lacks key enterprise features like compliance and regulatory controls, multi-site data protection and collaborative workflow integration. Development around these problems is proceeding on a number of fronts, including various Hadoop storage management solutions, as well as advanced SAN/NAS/appliance solutions, virtual or containerized Hadoop instances and cloud-based clustered storage.
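    The property that makes DAS such a natural fit here is data locality: the scheduler tries to run each task on a node whose local disks already hold a replica of the block it needs, so bulk data rarely crosses the network. A minimal sketch of the idea (this is not Hadoop's actual scheduler; the block and node names are hypothetical):

        # Illustrative sketch of locality-aware task placement, the principle
        # behind DAS in Hadoop clusters. Not Hadoop's real scheduler; the
        # block map and node names below are hypothetical.
        block_replicas = {
            "block-001": {"node-a", "node-c"},   # each block lives on the local
            "block-002": {"node-b", "node-c"},   # disks of a few worker nodes
            "block-003": {"node-a", "node-b"},
        }

        def place_task(block_id, idle_nodes):
            """Prefer a node that already stores the block; else read remotely."""
            local = block_replicas[block_id] & idle_nodes
            if local:
                return local.pop(), "local read from DAS"
            return next(iter(idle_nodes)), "remote read over the network"

        node, mode = place_task("block-001", {"node-b", "node-c"})
        print(node, "->", mode)   # node-c -> local read from DAS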

    Leveraging DAS in scale-out applications is also emerging as a key strategy among traditional data center vendors. Dell has tapped Nutanix for the storage management components of its XC series appliance, incorporating a distributed file system within each node to aggregate DAS resources across the cluster. In this way, every host can access the pooled storage, the entire environment gains fault-tolerant backup and replication, and capacity can be expanded simply by adding more modules to the cluster.
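    Conceptually, such a system pools each node's local disks into a single cluster-wide store and replicates every write across nodes, so any host can read any data and a single node failure loses nothing. A simplified model of that behavior (purely illustrative; not the Nutanix implementation):

        # Simplified model of pooling per-node DAS into one cluster-wide store
        # with replicated writes. Conceptual only; not the Nutanix software.
        import random

        class ClusterPool:
            def __init__(self, replication_factor=2):
                self.rf = replication_factor
                self.nodes = {}                  # node name -> {key: data}

            def add_node(self, name):
                """Adding a module grows both capacity and the replication domain."""
                self.nodes[name] = {}

            def write(self, key, data):
                """Place rf copies on distinct nodes for fault tolerance."""
                for target in random.sample(sorted(self.nodes), self.rf):
                    self.nodes[target][key] = data

            def read(self, key):
                """Any host can serve the read from whichever node holds a copy."""
                for store in self.nodes.values():
                    if key in store:
                        return store[key]
                raise KeyError(key)

        pool = ClusterPool(replication_factor=2)
        for n in ("xc-node-1", "xc-node-2", "xc-node-3"):
            pool.add_node(n)
        pool.write("vm-disk-42", b"payload")
        print(pool.read("vm-disk-42"))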

    This works fine for your DAS resources, but what if you want to incorporate legacy SAN/NAS architectures as well, and by extension the apps and test/dev capabilities they support? A startup called Springpath promises to do just that with a distributed file system that supports block, file and object storage, plus advanced Hadoop applications. The system also supports traditional virtualization and newer container-based approaches like Docker, and it does so without sacrificing the speed, performance and advanced enterprise features that a standard multi-layered management stack would cost you.

    No matter how you arrange your storage in scale-out environments, the key element will be adequate networking. Centralized repositories will require wide data paths to shuttle loads in and out of the cluster, while attached solutions will need a high degree of flexibility to ensure all data is available to the processing centers that need it.
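    The scale of that gap is easy to estimate. The figures below are illustrative assumptions, but they show why a centralized repository needs a very wide data path to compete with many nodes reading their own disks in parallel:

        # Rough illustration of network width vs. parallel local DAS reads.
        # All numbers are assumptions chosen for the example.
        DATASET_TB = 100
        LINK_GBPS = 40                 # assumed pipe into a centralized array
        NODES = 20
        LOCAL_MBPS_PER_NODE = 1_000    # assumed local-disk throughput per node, MB/s

        central_secs = DATASET_TB * 1e12 * 8 / (LINK_GBPS * 1e9)
        local_secs = DATASET_TB * 1e12 / (NODES * LOCAL_MBPS_PER_NODE * 1e6)

        print(f"Central array over one {LINK_GBPS} Gbps path: {central_secs / 3600:.1f} h")
        print(f"{NODES} nodes reading local DAS in parallel:  {local_secs / 3600:.1f} h")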

    There are cost and performance trade-offs to every solution (as usual), and by and large the correct approach won't center on which technology is superior but on which offers the highest level of support to the application(s) at hand.

    It is nice to think that in the software-defined universe, any hardware resource can be repurposed for any need, but when it comes to storage in particular, it just isn’t so.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.
