
Pentaho to Give Users More Control Over Big Data


Written By
Mike Vizard
Oct 5, 2015

Recognizing that organizations will need to blend multiple types of data within the same analytics application, Pentaho, a unit of Hitachi, is gearing up to release an upgrade to its server platform that gives users more control over the data pipeline feeding an instance of the Pentaho business intelligence (BI) application.

Scheduled to be made available next week at the PentahoWorld 2015 conference, version 6 of the company’s namesake Big Data integration and analytics platform makes it possible for organizations to refine data locally without having to invoke a massive data warehouse that in turn pulls data from a data lake built on Hadoop, says Donna Prlich, vice president of product marketing and solutions for Pentaho. Instead, organizations can pull data directly into Pentaho in a way that enables them to create virtual data sets from blended sources of data, Prlich says.
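To make the idea of a blended, virtual data set concrete, here is a minimal sketch in Python using pandas. This is a generic illustration of data blending, not Pentaho’s actual interface; the sources, key, and column names are all hypothetical.

```python
import pandas as pd

# Two hypothetical sources: order records from a relational system and
# clickstream events from a Hadoop-based data lake (data is illustrative).
orders = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "order_total": [250.0, 99.5, 410.0],
})
clicks = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "page": ["home", "pricing", "home", "home", "docs", "pricing"],
})

# Blend the two sources into a single virtual data set on a shared key,
# without first staging everything in a central data warehouse.
page_views = (
    clicks.groupby("customer_id")
    .size()
    .reset_index(name="page_views")
)
blended = orders.merge(page_views, on="customer_id", how="left")

print(blended)  # one row per customer: order total plus page views
```

The point of the sketch is that the join happens where the analysis happens, rather than in a centralized repository that every query must round-trip through.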

Rather than relying solely on massive Big Data repositories, Pentaho is making a case for using a local server that lets end users interrogate multiple types of data without generating calls out to a repository that may reside in a data center anywhere in the world. Given the relatively low cost of deploying a local x86 server, Pentaho is making it easier to discover relationships between data sets locally in a way that also fosters more collaboration.

At the same time, Prlich notes that Pentaho has made a significant effort to include the data governance tools IT organizations need to make sure that individuals only gain access to data that they are authorized to use.
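The governance point can be illustrated with a simple sketch: filtering a data set down to the rows a user is entitled to see before it is exposed for analysis. This is a generic illustration, not Pentaho’s governance tooling; the roles, regions, and column names are hypothetical.

```python
import pandas as pd

# Hypothetical entitlements: which regions each analyst may see.
ENTITLEMENTS = {
    "analyst_emea": {"EMEA"},
    "analyst_global": {"EMEA", "AMER", "APAC"},
}

def authorized_view(df: pd.DataFrame, user: str) -> pd.DataFrame:
    """Return only the rows of a data set the user is authorized to access."""
    allowed = ENTITLEMENTS.get(user, set())
    return df[df["region"].isin(allowed)]

data = pd.DataFrame({
    "region": ["EMEA", "AMER", "APAC"],
    "revenue": [120, 340, 90],
})

print(authorized_view(data, "analyst_emea"))  # only the EMEA row
```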

When the term Big Data gets bandied about, thoughts naturally turn toward data warehouses and Hadoop clusters. But no matter how big the data actually gets, the consumption of that data, much like politics, is always going to be local.


Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including ProgrammableWeb, IT Business Edge, CIO Insight and UBM Tech. He was formerly editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He has also held editorial positions at PC Week, Computerworld and Digital Review.
