    MapR Alliance with Fusion-io Puts Hadoop on SSD Steroids

    For organizations that find Hadoop performance to be frustratingly slow, MapR Technologies has an answer.

    This week at the Hadoop Summit 2013 conference, MapR announced a partnership with Fusion-io under which MapR’s distribution of Hadoop will run on solid-state drives (SSDs) from Fusion-io.

    According to Jack Norris, chief marketing officer for MapR, Fusion-io SSD cards installed in servers will boost Hadoop performance by 25 percent, considerably narrowing the performance gap between Hadoop and rival database systems that run in-memory.

    And when you take into consideration that Hadoop costs a few hundred dollars per terabyte to deploy while still delivering a million read IOPS, Norris says it’s clear that the time has come to run Hadoop applications in production.

    Part of the reason for that, says Norris, is that the MapR distribution of Hadoop is designed to write directly to disk, which eliminates the dependencies on Java and the Linux file system that slow the performance of other Hadoop distributions.
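
    To make that concrete, here is a minimal Python sketch of the general technique of writing directly to disk. This is not MapR’s code: the file name and block size are assumptions, and O_DIRECT is a Linux-specific flag that bypasses the operating system’s page cache.

        import mmap
        import os

        # Illustrative sketch only, not MapR's implementation. O_DIRECT sends
        # writes straight to the device instead of staging them in the Linux
        # page cache, but it requires block-aligned buffers, lengths and file
        # offsets; an anonymous mmap gives us a page-aligned buffer.
        BLOCK = 4096  # assumed device/file-system block size

        buf = mmap.mmap(-1, BLOCK)
        buf.write(b"hello, direct i/o".ljust(BLOCK, b"\x00"))

        # The target file system must support O_DIRECT (ext4 and xfs do,
        # tmpfs does not).
        fd = os.open("direct_demo.dat", os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
        try:
            os.write(fd, buf)  # lands on disk without passing through the cache
        finally:
            os.close(fd)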

    Because Hadoop can store massive amounts of data in its raw form, Norris notes, IT organizations also no longer need to bother with extract, transform and load (ETL) processes, and the MapR distribution of Hadoop includes support for replication, snapshots and other data protection capabilities.
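
    As a sketch of what skipping ETL looks like in practice, raw source files can simply be copied into the cluster as-is and interpreted later. The file and directory names below are hypothetical; hadoop fs -put is the standard Hadoop file-copy command, and MapR’s distribution also exposes the cluster over NFS, so an ordinary file copy works as well.

        import subprocess

        # Hedged sketch: land a raw, compressed web log in the cluster
        # untouched, with no transform step before the data is stored.
        # The local file and target directory are hypothetical.
        subprocess.run(
            ["hadoop", "fs", "-put", "web_logs.gz", "/raw/weblogs/"],
            check=True,
        )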

    Taken together, Norris says, those capabilities mean Hadoop allows IT organizations to get away from the continual proliferation of data silos that has driven up the total cost of IT ownership in the enterprise for decades.

    It’s not clear yet to what degree Hadoop will be used across the enterprise. There is a general consensus that Hadoop makes an excellent low-cost alternative for off-loading data from SQL-based data warehouses that are considerably more expensive to run. But there’s also a case to be made for standardizing on Hadoop for most applications, because it allows those applications to work with raw unstructured data in multiple forms of “polyglot persistence” rather than incurring the expense of first structuring the data and creating schemas in traditional SQL databases.
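
    A short, hypothetical schema-on-read example makes that contrast plain. Rather than designing a SQL schema and loading data into it up front, the raw records (here, newline-delimited JSON with made-up file and field names) are stored as-is and given structure only at query time:

        import json

        # Schema-on-read sketch: structure is imposed when the data is read,
        # not when it is stored. The file name and field names are made up.
        def read_events(path):
            with open(path) as f:
                for line in f:
                    yield json.loads(line)  # each raw line parsed on demand

        # Each "query" decides which fields matter; records lacking a field
        # are simply skipped instead of breaking a predefined schema.
        clicks = sum(1 for e in read_events("raw_events.log")
                     if e.get("event_type") == "click")
        print(clicks)

    The trade-off is that every read pays the parsing cost, which is the kind of work Hadoop is designed to spread across a cluster.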

    While that transformation may take the better part of a decade to play out, one thing is certain: in the years to come, IT organizations will be seeing a whole lot more of Hadoop across the enterprise.

    Mike Vizard
    Michael Vizard is a seasoned IT journalist with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He was formerly editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has served as editor in chief of both CRN and InfoWorld. He has also held editorial positions at PC Week, Computerworld and Digital Review.
