Visualizing Massive Amounts of Big Data

Michael Vizard

At the end of the day, the whole point of a business intelligence (BI) application is to make it easier to discern patterns and trends that would otherwise not be obvious. As such, the competition between BI applications is ultimately going to come down to which one best fulfills that mission, especially in an era when Big Data is making massive amounts of information readily available.


Pentaho upped its game in that regard with today's release of version 4.5 of its namesake open source BI application. The new release enhances the application's core visualization engine with geo-mapping, heat grids and scatter/bubble chart visualizations; adds interactive visual analysis capabilities such as lasso filtering, zooming and attribute highlighting on all chart types; and delivers a variety of reporting enhancements.
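Pentaho's visualization engine is proprietary to the product, but the chart types themselves are easy to illustrate. Below is a minimal sketch of a scatter/bubble chart in Python with matplotlib; the sales figures and labels are hypothetical, used only to show how a bubble chart encodes a third measure as point size.

import matplotlib.pyplot as plt

# Hypothetical data: revenue vs. margin per region, with bubble size
# encoding a third measure (order volume)
revenue = [120, 340, 560, 210, 480]
margin = [0.12, 0.22, 0.18, 0.30, 0.25]
orders = [150, 400, 800, 90, 300]
labels = ["North", "South", "East", "West", "Central"]

fig, ax = plt.subplots()
# Scale the order counts so the bubbles stay readable
ax.scatter(revenue, margin, s=[o * 2 for o in orders], alpha=0.5)
for x, y, name in zip(revenue, margin, labels):
    ax.annotate(name, (x, y))
ax.set_xlabel("Revenue ($K)")
ax.set_ylabel("Margin")
ax.set_title("Scatter/bubble chart: three measures in one view")
plt.show()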

In addition, Pentaho Chief Technology Evangelist Ian Fyfe says Pentaho 4.5 includes a number of in-memory cache performance improvements to support these compute-intensive visualization capabilities. At the same time, Pentaho is expanding the range of data sources it supports: simplified deployment options for Hadoop clusters, including new packaging for MapR and support for Hadoop's distributed cache, along with expanded NoSQL database integration that covers read, write and reporting support for Apache Cassandra, DataStax and MongoDB databases.
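Pentaho exposes these connections through its own tooling, but the kind of NoSQL read-and-report access described here is straightforward to sketch. The following Python example uses the standard pymongo driver to pull a report-style aggregate out of MongoDB; the connection string, database and collection names are hypothetical.

from pymongo import MongoClient

# Hypothetical connection, database and collection for illustration
client = MongoClient("mongodb://localhost:27017")
orders = client["sales"]["orders"]

# Aggregate order totals per region, the kind of summary a BI
# report or chart would be built on
pipeline = [
    {"$group": {"_id": "$region", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
]
for row in orders.aggregate(pipeline):
    print(row["_id"], row["total"])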

Pentaho has long argued that pairing its BI application with a built-in framework for integrating data makes the combined offering more compelling than BI applications that require IT organizations to acquire separate extract, transform and load (ETL) tools. As the volume of data that end users need to manipulate continues to increase thanks to the rise of Hadoop and NoSQL data management frameworks, that argument only becomes stronger.
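Pentaho's integration framework does this work graphically inside the product; for readers unfamiliar with the pattern, here is a minimal, generic sketch of the extract, transform and load steps in Python, using a hypothetical orders.csv file with region and amount columns, to show the job a separate ETL tool would otherwise have to do.

import csv
import sqlite3

# Extract: read raw rows from a CSV export (file name is hypothetical)
with open("orders.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Transform: normalize currency strings into numeric amounts
for row in rows:
    row["amount"] = float(row["amount"].replace("$", "").replace(",", ""))

# Load: write the cleaned rows into a reporting table
conn = sqlite3.connect("warehouse.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (region, amount) VALUES (:region, :amount)", rows
)
conn.commit()
conn.close()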

The good news is that rather than extrapolating insights from a thin slice of data, NoSQL databases make it easier and more affordable to base decisions on large, complete data sets. The challenge going forward is finding applications with the visualization capabilities to actually absorb and present all that information in a way that can be easily consumed.


