Click through for an overview of the Hadoop stack and gain a better understanding of its capabilities, as identified by Loraine Lawson.
Everyone tends to focus on the “big” in Big Data, so much so that it’s easy to lose sight of the fact that Hadoop is really about data. Let’s regroup for a minute and really look at what’s going on with the data on Hadoop.
First, there’s the core. When people say “Hadoop,” they’re usually referring to the Hadoop core, which consists of two components, as Loraine Lawson explained:
The Hadoop Distributed File System (HDFS). What’s it doing with the data? It’s distributing it across nodes and storing it there.
MapReduce. This does the real work in the Hadoop core. If you want to run a process or computation on the data, it “maps” that work out to the nodes, runs the process there, and then “reduces” the results down to your answer. So, it’s processing the data.
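Real Hadoop jobs are written against the MapReduce Java API and run across a cluster; the following standalone Python sketch is only a conceptual illustration of the two phases described above, a word count with the distribution across nodes left out.

```python
from collections import defaultdict

def map_phase(records):
    # "Map": emit a (word, 1) pair for every word in every input record.
    # On a real cluster, this function would run in parallel on each node.
    for record in records:
        for word in record.split():
            yield word, 1

def reduce_phase(pairs):
    # "Reduce": sum the counts emitted for each word into the final answer.
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

lines = ["big data", "big hadoop data"]
result = reduce_phase(map_phase(lines))
print(result)  # {'big': 2, 'data': 2, 'hadoop': 1}
```

The key idea is that the map step produces independent pieces of work that can run anywhere the data lives, and the reduce step folds those pieces back into one result.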
Now, if you’re familiar with data at all, you’ll notice there are a whole lot of things missing from that equation.
This is where the growing list of Apache Hadoop-related projects comes into play.
These projects go by an odd assortment of names (Pig, Hive, Flume, ZooKeeper), but they’re often short-changed when we talk about Hadoop. Loraine has seen them referred to as the “Hadoop stack,” though some programmers prefer “Hadoop ecosystem.” Forrester refers to them as “functional layers.”
For the most part, they’re of more interest to developers than executives, but hopefully a high-level view of these solutions will add some depth to your understanding of Hadoop and its capabilities.
Here are a few of the more common names you’ll hear.
An eWEEK Property
Copyright 2020 TechnologyAdvice All Rights Reserved.