Are we already entering the trough of disillusionment with Big Data? Gartner Research Director Svetlana Sicular says she sees early indications that we are.
My, how time flies. It seems only yesterday that Big Data was shiny and new.
Then again, it was easy to become oversaturated with Big Data, since it was one of the few technology trends to make business headlines while technologists were still just beginning to grapple with implementations.
If you’re familiar with Gartner’s hype cycle theory, you’ll know that the trough of disillusionment comes hot on the heels of the “peak of inflated expectations.” That’s followed by a slower curve in adoption and expectations as the trend moves to the “slope of enlightenment” and finally the “plateau of productivity.”
Yes, as Rodney Brown, editor-in-chief of CloudEcosystems remarked, it does sound like marketing is playing a game of Dungeons & Dragons.
Still, in my 10-plus years of tech journalism, I’ve found it to be an inevitable prediction of what happens with tech trends from both a marketing and adoption perspective. You can see a picture of the full hype cycle on Sicular’s post.
Sicular points to two incidents to support her belief that people are getting disillusioned with Hadoop. At a meetup she attended with representatives of the major Hadoop distribution vendors, some attendees complained that MapReduce is a Hadoop bottleneck, while others dismissed Hadoop as “primitive and old-fashioned.”
As Sicular points out, this isn’t just about people being overexposed to Hadoop press; they’re also frustrated by implementation problems.
“Meanwhile, my most advanced with Hadoop clients are also getting disillusioned,” Sicular writes. “They do not realize that they are ahead of others and think that someone else is successful while they are struggling. These organizations have fascinating ideas, but they are disappointed with a difficulty of figuring out reliable solutions.”
These frustrations arise primarily when companies apply Hadoop to more advanced cases of sentiment analysis, which require it to work “beyond traditional vendor offerings,” she writes. Those using Hadoop in new ways, such as linking a variety of unstructured data sources, also tend to be frustrated with their progress, she adds.
“Several days ago, a financial industry client told me that framing a right question to express a game-changing idea is extremely challenging: first, selecting a question from multiple candidates; second, breaking it down to many sub-questions; and, third, answering even one of them reliably,” she writes. “It is hard.”
There is one bright spot: Using Splunk for log analysis is “the only consistent success” reported by her clients.
“Why? Because Splunk is a (nice) tool,” she explains. “And plateau of productivity will be reached when tools and product suites saturate the market.”
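It is worth noting why log analysis is such a reliable win: log lines have a regular, self-describing shape, so they reduce cleanly to parsing and counting. As a minimal sketch of that kind of aggregation (plain Python, not Splunk's actual ingestion pipeline, with an Apache-style log format assumed for illustration):

```python
import re
from collections import Counter

# Hypothetical Apache-style access log layout, assumed for illustration;
# this is not Splunk's ingestion format.
LOG_LINE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3})'
)

def count_status_codes(lines):
    """Tally HTTP status codes across raw log lines, skipping unparsable ones."""
    counts = Counter()
    for line in lines:
        match = LOG_LINE.match(line)
        if match:
            counts[match.group("status")] += 1
    return counts

sample = [
    '10.0.0.1 - - [01/Feb/2013:10:00:00 +0000] "GET /index.html HTTP/1.1" 200',
    '10.0.0.2 - - [01/Feb/2013:10:00:01 +0000] "GET /missing HTTP/1.1" 404',
    '10.0.0.1 - - [01/Feb/2013:10:00:02 +0000] "POST /api HTTP/1.1" 200',
]
print(count_status_codes(sample))  # Counter({'200': 2, '404': 1})
```

Because the problem decomposes this neatly, a polished tool can carry most of the load, which is Sicular's point about why Splunk succeeds where open-ended Hadoop projects struggle.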
Another low-risk Hadoop implementation is using Hadoop as a staging area for your data warehouse data, according to Rob Klopp, a data warehouse expert who works for the HANA group at SAP and maintains a personal blog.
Once you’ve learned to do that, you can expand to running transformations in ETL processes, he adds. Many companies have already found success with that approach.
“Hadoop uses inexpensive hardware and very inexpensive software. It can become your staging area and your raw data warehouse with little effort,” Klopp writes. “Using Hadoop as the staging area for your data warehouse data might provide a low risk way to get started with Hadoop… with an ROI… preparing your staff for other Hadoop things to come.”
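The staging-area pattern Klopp describes amounts to landing raw data cheaply and deferring validation and typing to a later transformation step. Setting Hadoop itself aside, the shape of that step can be sketched in plain Python; the file schema and field names here are invented for illustration, not taken from Klopp's post:

```python
import csv
import io

# Raw "staged" data as it might land in a staging area: untyped, unvalidated
# text. The (id, amount, region) schema is a made-up example.
RAW_STAGED = """id,amount,region
1,19.99,EU
2,notanumber,US
3,5.00,EU
"""

def transform(raw_text):
    """The transformation step: validate and type rows from the staging
    area, keeping good rows and setting malformed ones aside."""
    good, bad = [], []
    for row in csv.DictReader(io.StringIO(raw_text)):
        try:
            good.append({"id": int(row["id"]),
                         "amount": float(row["amount"]),
                         "region": row["region"]})
        except ValueError:
            bad.append(row)  # would be quarantined for inspection
    return good, bad

good, bad = transform(RAW_STAGED)
print(len(good), len(bad))  # 2 1
```

The appeal of the pattern is that the risky, schema-dependent work happens after the cheap landing step, so a failed transformation never costs you the raw data.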
If you’d like to read more about the current views on Big Data and Hadoop, Computerworld recently ran an article referencing Sicular’s post and an Ovum report analyzing mentions of Big Data on Twitter.