Data Lakes: 8 Enterprise Data Management Requirements


Transformations and Analytics

Not only are systems like Hadoop more flexible in the types of data they can store, they are also more flexible in the types of queries and computations that can be performed on that data. SQL is a powerful language for querying and transforming relational data, but it is not well suited to querying non-relational data or to expressing iterative machine-learning algorithms and other arbitrary computations. Tools like Hive, Impala and Spark SQL bring SQL-like queries to Hadoop data, while tools like Cascading, Crunch and Pig bring more flexible data processing to it. Most of these tools are powered by one of the two most widely used data processing engines: MapReduce or Spark.
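To make this concrete, here is a minimal sketch of a SQL-style query over semi-structured lake data using Spark SQL through PySpark. The input path and field names (event_type and so on) are hypothetical illustrations, not part of the original slideshow, and the sketch assumes a working Spark installation.

```python
# Minimal sketch: running a SQL-style query over non-relational (JSON) data
# with Spark SQL. The file path and field names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-query").getOrCreate()

# Load semi-structured JSON straight from the lake; Spark infers a schema.
events = spark.read.json("/data/lake/raw/events/")  # hypothetical path
events.createOrReplaceTempView("events")

# A familiar SQL query over data that never passed through a relational ETL step.
summary = spark.sql("""
    SELECT event_type, COUNT(*) AS n
    FROM events
    GROUP BY event_type
    ORDER BY n DESC
""")
summary.show()
```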

In the data lake, we see three types of transformations and analytics: simple transformations, analytic queries and ad-hoc computation. Simple transformations include tasks such as data preparation, data cleansing and filtering. Analytic queries provide a summary view of a data set, perhaps cross-referencing other data sets. Finally, ad-hoc computation can support a variety of algorithms, for example, building a search index or classifying records via machine learning. Such algorithms are often iterative in nature and require several passes over the data.
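As an illustration of the first category, the sketch below shows a simple cleansing-and-filtering pass over raw lake data, again using PySpark. The paths and column names (event_id, user_id, event_time) are hypothetical placeholders chosen for the example.

```python
# Minimal sketch of a "simple transformation" stage: cleansing and filtering
# raw lake data before analysis. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-prep").getOrCreate()

raw = spark.read.json("/data/lake/raw/events/")  # hypothetical path

cleaned = (
    raw
    .dropDuplicates(["event_id"])                    # remove replayed records
    .filter(F.col("user_id").isNotNull())            # drop rows missing a key field
    .withColumn("ts", F.to_timestamp("event_time"))  # normalize the timestamp
)

# Write the prepared data back to a curated zone of the lake.
cleaned.write.mode("overwrite").parquet("/data/lake/curated/events/")
```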

2016 is the year of the data lake. It will surround and, in some cases, drown the data warehouse, and we'll see significant technology innovations, methodologies and reference architectures that turn the promise of broader data access and Big Data insights into a reality. But Big Data solutions must mature and go beyond their current role as tools primarily for highly skilled developers. The enterprise data lake will allow organizations to track, manage and leverage data they've never had access to before. New data management strategies are already leading to more predictive and prescriptive analytics, driving improved customer-service experiences, cost savings and an overall competitive advantage when they are aligned with key business initiatives.

So whether your enterprise data warehouse is on life support or moving into maintenance mode, it will most likely continue to do what it's good at for the time being: operational and historical reporting and analysis (a.k.a. rear-view mirror).

As you consider adopting an enterprise data lake strategy to manage more dynamic, poly-structured data, your data integration strategy must also evolve to handle the new requirements. Thinking that you can simply hire more developers to write code or rely on your legacy rows-and-columns-centric tools is a recipe for sinking in a data swamp instead of swimming in a data lake. In this slideshow, Craig Stewart, VP of product management at SnapLogic, identifies eight enterprise data management requirements that must be addressed to get maximum value from your Big Data technology investments.

 
