It’s been said that Big Data and the cloud go together like chocolate and peanut butter, but it looks like more symbiosis is at work here than meets the eye.
While on the surface it may seem like the two developments appeared at the same time by mere coincidence, the more likely explanation is that they both emerged in response to each other – that without the cloud there would be no Big Data, and without Big Data there would be no real reason for the cloud.
Silicon Angle’s Maria Deutscher hit on this idea recently, noting that the two seem to be feeding off each other: As enterprises start to grapple with Big Data, they will naturally turn to the cloud to support the load, which in turn will generate more data and the need for additional cloud resources. In part, this is a continuation of the old pattern in which more computing power and capacity simply leads users to increase their data requirements. Of course, the cloud comes with additional security and availability concerns, but in the end it is the only way for already stretched IT budgets to feasibly cope with the amount of data being generated on a daily basis.
And it seems that adoption of both technologies will only accelerate as the decade unfolds. According to MarketsandMarkets, Big Data is on pace to grow more than 25 percent per year through 2018, to hit $46.34 billion. Primary drivers will be demand for Big Data applications and the rise of unified appliances that will generate increasing amounts of machine-to-machine (M2M) data in the quest to optimize operational efficiency and enhance the user experience. At the same time, cloud providers are increasingly applying the service model to Big Data loads, offering tools like Analytics-as-a-Service, Hadoop-as-a-Service, and even Big Data-as-a-Service.
This is only natural, says Cloud Tweaks’ Daniel Price, given the cloud’s enormous propensity for scale. Big Data has a tendency to devour large amounts of resources in a relatively short time and then shrink back to near zero when the job is done. The cloud can accommodate this bursty lifestyle better than all but the largest enterprise infrastructures, and cloud providers have a greater incentive (profits) to implement both the specialized technology and the skillsets needed to provide top QoS for Big Data applications. If it’s a question of gaining the best analytical performance quickly and at minimal cost, most organizations will have to turn to the cloud.
Once on the cloud, however, a wide range of options is available when it comes to managing and analyzing the data you’ve collected. And although I’m loath to start touting yet another cutesy image for a complex data architecture, some futurists are already talking about data “lakes” that provide massive stores to house both “unclean” and “pure” data so it can be standardized and formatted for analysis. Not only would this lessen the resources needed for Big Data loads, but it would facilitate the sharing and aggregation of disparate data sets, enhancing the quality of the final analysis.
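To make the lake metaphor concrete, here is a minimal sketch of the two-zone idea described above: everything lands as-is in a raw zone, and a standardization pass promotes what it can into a curated zone. All names here (`DataLake`, `ingest`, `curate`) are illustrative assumptions, not any vendor’s actual API.

```python
class DataLake:
    """Toy lake with two zones: raw ("unclean") and curated ("pure")."""

    def __init__(self):
        self.raw = []       # land everything as-is; schema applied on read
        self.curated = []   # cleaned, consistently formatted records

    def ingest(self, record):
        # Accept any shape; no validation at write time.
        self.raw.append(record)

    def curate(self, standardize):
        # Promote raw records that the standardization rule can clean;
        # records that can't be cleaned simply stay in the raw zone.
        for rec in self.raw:
            try:
                self.curated.append(standardize(rec))
            except (KeyError, ValueError):
                continue

def standardize(rec):
    # Example rule: a curated record needs a trimmed, lowercase 'user'
    # and a numeric 'amount'.
    return {"user": rec["user"].strip().lower(), "amount": float(rec["amount"])}

lake = DataLake()
lake.ingest({"user": "  Alice ", "amount": "42.5"})
lake.ingest({"user": "Bob", "amount": "not-a-number"})  # stays raw only
lake.curate(standardize)
print(lake.curated)  # [{'user': 'alice', 'amount': 42.5}]
```

The point of the design is exactly what the “lake” image suggests: cheap, indiscriminate storage up front, with cleaning deferred until the data is actually needed for analysis, so disparate sets can be aggregated against one standard format.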
Big Data may be a top challenge right now, but the enterprise does have a lot of technology at its disposal to meet it. Technology alone, however, will not be enough to drive real value from this investment. Turning data into usable knowledge will take brainpower – not just the ability to manage and coordinate analyses but the wisdom to recognize that results on paper do not necessarily equal “truth,” even if they are produced by the most sophisticated tools known to man.
The cloud and Big Data can certainly be leveraged to help inform decisions, but that doesn’t mean there isn’t room for gut instinct.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.