When enterprise executives try to wrap their minds around the challenges of Big Data, two things quickly become evident: Big Data will require Big Infrastructure in one form or another, and it will also require new levels of management and analysis to turn that data into valuable knowledge.
Too often, however, the latter part of that equation gets all the attention, resulting in situations in which all the tools are in place to coordinate and interpret massive reams of data, only to get bogged down in endless traffic bottlenecks and resource allocation issues. And since Big Infrastructure usually requires Big Expenditures, it makes sense to formulate a plan now to accommodate the kinds of volumes that are expected to become the common workloads of the very near future.
To some, that means the enterprise will have to adopt more of the technologies and architectures that currently populate the high-performance computing (HPC) world of scientific and educational facilities. As ZDNet’s Larry Dignan pointed out this month, companies like Univa are adapting platforms like Oracle’s Grid Engine to enterprise environments. Company CEO Gary Tyreman notes that it’s one thing to build a pilot Hadoop environment, but quite another to scale it to enterprise levels. Clustering technologies and even high-end appliances will go a long way toward getting the enterprise ready to truly tackle the challenges of Big Data.
Integrated hardware and software platforms are also making a big push for the enterprise market. Teradata just introduced its Unified Data Environment and Unified Data Architecture solutions, which seek to dismantle the silos that keep critical disparate data sets apart. By uniting key systems like the Aster and Apache Hadoop releases with new tools like Viewpoint, Connector and Vital Infrastructure, and wrapping them in the new Warehouse Appliance 2700 and Aster Big Analytics appliances, the platforms aim for nothing less than complete, seamless integration and analysis of accumulated enterprise knowledge.
As I mentioned, though, none of this will come on the cheap. Gartner predicts that Big Data will account for $28 billion in IT spending this year alone, rising to $34 billion next year and consuming about 10 percent of total capital outlays. Perhaps most ominously, nearly half of Big Data budgets will go toward social network analysis and content analytics, while only a small fraction will find its way to increasing data functionality. It seems, then, that the vast majority of enterprises are seeking to repurpose existing infrastructure for the needs of Big Data. It will be interesting to see whether future studies illuminate the success or failure of that strategy.
Indeed, as application performance management (APM) firm OpTier notes in a recent analysis of Big Data trends, the primary challenge isn't simply to drill into large data volumes for relevant information, but to do it quickly enough that the data's value can be maximized. And on this front, the IT industry as a whole is sorely lacking. Fortunately, speeding up the process is not only a function of bigger and better hardware. Improved data preparation and contextual storage practices can go a long way toward making data easier to find, retrieve and analyze, much the same way that wide area networks can be improved through optimization rather than by adding bandwidth.
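The data-preparation point can be sketched in a few lines: if records are partitioned by a query-relevant key at write time, later reads touch only the files they need instead of scanning the entire data set. Everything below, including the record layout, the `date` key and the helper names, is a hypothetical illustration of the general practice, not any vendor's API:

```python
import json
import os
import tempfile
from collections import defaultdict

def write_partitioned(records, root):
    """Group records by their 'date' field and write one file per date.

    This is the 'preparation' step: organizing data by how it will be
    queried, rather than dumping everything into one undifferentiated pile.
    """
    buckets = defaultdict(list)
    for rec in records:
        buckets[rec["date"]].append(rec)
    for date, recs in buckets.items():
        path = os.path.join(root, "date=%s.jsonl" % date)
        with open(path, "w") as f:
            for rec in recs:
                f.write(json.dumps(rec) + "\n")

def read_partition(root, date):
    """Read only the partition for one date -- no full scan required."""
    path = os.path.join(root, "date=%s.jsonl" % date)
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return [json.loads(line) for line in f]

# Illustrative records; in practice these would be log or event data.
records = [
    {"date": "2012-10-01", "user": "a", "bytes": 120},
    {"date": "2012-10-01", "user": "b", "bytes": 300},
    {"date": "2012-10-02", "user": "a", "bytes": 50},
]
root = tempfile.mkdtemp()
write_partitioned(records, root)
day_one = read_partition(root, "2012-10-01")
```

The same idea underlies partitioned tables in Hadoop-family tools: the speedup comes from the layout of the data, not from faster hardware, which is exactly the kind of optimization-over-bandwidth trade-off described above.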
In short, then, enterprises will need to shore up infrastructure to handle increased volumes of traffic, but once that foundation is in place, many of the tools needed to make sense of it all are already available. The catch is that none of this will be optional. As in sports, success in business is usually a matter of inches, and organizations of all stripes are more than willing to invest in substantial infrastructure improvements to gain an edge, even a small one.