MapR Alliance with Fusion-io Puts Hadoop on SSD Steroids

Mike Vizard

For organizations that find Hadoop performance to be frustratingly slow, MapR Technologies has an answer.

This week at the Hadoop Summit 2013 conference, MapR announced a partnership with Fusion-io under which its distribution of Hadoop will run on solid-state drives (SSD) from Fusion-io.

According to Jack Norris, chief marketing officer for MapR, Fusion-io SSD cards installed in servers will boost Hadoop performance by 25 percent, considerably narrowing the performance gap between Hadoop and rival database systems that run in-memory.

And when you consider that Hadoop costs a few hundred dollars per terabyte to deploy while still delivering a million read IOPS, Norris says it's clear that Hadoop's time to run applications in production has come.

Part of the reason for that, says Norris, is that the MapR distribution of Hadoop is designed to write directly to disk, eliminating the dependencies on Java and the Linux file system that slow the performance of other Hadoop distributions.

Because Hadoop can support massive amounts of data, Norris notes that IT organizations no longer need to bother with extract, transform and load (ETL) processes. The MapR distribution of Hadoop also includes support for replication, snapshots and other data protection capabilities.

Taken together, Norris says, those capabilities mean Hadoop allows IT organizations to move away from the continuous proliferation of data silos that has driven up the total cost of IT ownership for decades.

It's not yet clear to what degree Hadoop will be used across the enterprise. There is a general consensus that Hadoop makes an excellent low-cost alternative for off-loading data from SQL-based data warehouses, which are considerably more expensive to run. But there's also a case to be made for standardizing on Hadoop for most applications: it allows those applications to work with raw unstructured data in multiple forms of "polyglot persistence," rather than going to the expense of first structuring data and creating schemas in traditional SQL databases.

While that transformation may take the better part of a decade to play out, the one thing that is for certain is that in the years to come, IT organizations will be seeing a whole lot more of Hadoop across the enterprise.
