The Data Conundrum

Michael Vizard
Slide Show

The Business Impact of Big Data

Many business executives want more information than ever, even though they're already drowning in it.

It's pretty clear to everybody at this juncture that we're dealing with exponentially more data than ever. Theoretically, more data should lead to better decision making. But while that's a nice idea in theory, the challenge we face today is that the tools available to analyze that data are still relatively immature, and the cost of storing all of it is growing beyond what the typical IT budget can support.

A new survey of 543 C-level and IT executives, conducted by the Kelton Group on behalf of the IT services firm Avanade, finds that email and other traditional enterprise applications still generate the most data and that, not surprisingly, business executives are struggling to cope with all of it.

But Avanade CIO Chris Miller rightly points out that things may get a lot worse before they get better. With more sensors than ever starting to stream data back to the enterprise, coupled with a general rise in the use of mobile computing devices, there is going to be more data to analyze than ever.

The first order of business for many IT organizations is going to be figuring out how to store it all. Increasingly, we're seeing interest in high-performance database systems to help process all this data, and in open source frameworks such as Hadoop for managing it. Hadoop provides a low-cost mechanism for managing large volumes of data, while higher-end database systems provide the raw horsepower needed to analyze sets of data that are of special relevance to the business.

Of course, deciding what data to store where becomes the challenge. More often than not, there is data sitting in a SQL database that hasn't been accessed in months or years. That's an expensive place to keep data when much more cost-effective options, such as Hadoop or an open source database like MySQL, are available.

What all this really means is that data management skills and technologies are going to be critical going forward. It's not enough to collect data; the system has to automatically recognize where best to store that data based on its relative value to the business. Once that's determined, IT organizations have a self-evident framework for applying analytics applications to the data that has the highest value to the business.
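To make the idea concrete, here is a minimal sketch of the kind of tiering policy described above, written in Python. The thresholds, tier names and the `choose_tier` function are all hypothetical illustrations, not taken from any specific product; a real system would weigh many more signals than access recency and a coarse value label.

```python
from datetime import datetime, timedelta

# Hypothetical policy knobs: the windows below are illustrative, not
# drawn from any vendor's product.
HOT_WINDOW = timedelta(days=30)    # accessed within the last month
WARM_WINDOW = timedelta(days=365)  # accessed within the last year

def choose_tier(last_accessed, business_value, now=None):
    """Map a data set to a storage tier by access recency and value.

    business_value is a coarse label ("high" or "low") assigned by
    the business; last_accessed is the most recent read timestamp.
    """
    now = now or datetime.utcnow()
    age = now - last_accessed
    if age <= HOT_WINDOW and business_value == "high":
        return "high-end analytic database"        # fast but expensive
    if age <= WARM_WINDOW:
        return "open source database (e.g. MySQL)" # moderate cost
    return "Hadoop (low-cost bulk storage)"        # cold, rarely read

# Example: recently used, high-value data lands on the expensive tier;
# data untouched for years falls through to Hadoop.
now = datetime(2010, 11, 1)
print(choose_tier(datetime(2010, 10, 25), "high", now))
print(choose_tier(datetime(2009, 1, 1), "low", now))
```

The point of the sketch is simply that the placement decision can be automated once the business assigns relative value to each data set, which is exactly the framework the article is calling for.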

Only then can we stop inundating business executives with massive amounts of meaningless data in favor of specific pieces of highly relevant information.

Nov 13, 2010 7:21 PM Dan Graham says:


Your opening assumptions are counter to industry trends. Companies gather and analyze the growing data volumes because the cost per terabyte keeps dropping, and analytic subsystems like MPP databases and advanced BI tools provide ROI from 400% to 1000% or more.

The flood of new data formats heading toward businesses comes from RFID, smart devices, medical records, social networks, sensors, and so on. Organizations that learn to use this new data differentiate themselves, creating a competitive advantage. The most important point you made is that data should be placed in the right location based on its cost and usage. To that end, new database technologies are measuring the 'heat' of data usage, placing hot data on fast, expensive devices like solid state disks and migrating cold data to large-capacity, cheaper devices.


My company, Teradata, recently announced partnerships with Karmasphere and Cloudera to help companies cope with high-volume, low-cost unstructured data using Hadoop. By combining vast amounts of complex unstructured data (such as emails, telephone calls and web data) with structured data in an Enterprise Data Warehouse, companies gain new insights into customers, sales, inventory, campaigns and more. Our customers are already doing it, so the competition has entered yet another lap in the race to squeeze more knowledge and advantage from the data deluge.

Nov 15, 2010 9:17 AM m ellard says:

It seems like the biggest issue in looking at all the different types of data and data sources is separating the wheat from the chaff, finding the insights that are in there.

I agree, this gets harder with data flooding in faster and faster from more and more places. However, with a disciplined approach, serious de-duplication happens, systems sync up, and all that data becomes a real benefit.

This white paper lays out the steps to taming the data tsunami ...  


