The Need for Speed with Big Data

Michael Vizard

With the advent of data management frameworks such as Hadoop, there has been a general recognition that SQL databases will not be able to serve all our data needs going forward.


Data sets approaching 100 TB exceed the capacity limits that SQL databases can effectively handle. In addition, many IT organizations are realizing that they have sets of data that can be processed at speeds that don't require the performance of a SQL database. In those instances, Hadoop offers a low-cost alternative to a SQL database.


But as IT organizations familiarize themselves with these "big data" concepts, they will discover the performance limitations of Hadoop. That will create demand for an approach that can process big data at high speed to quickly identify correlations.

 

Aster Data is targeting an approach to managing big data that runs on top of a massively parallel database. According to Aster Data CEO Mayank Bawa, IT organizations soon will need to sift through massive amounts of data in real time to recognize patterns, such as what a customer is buying, and correlate them with other data, such as what similar customers ordered, in order to make relevant offers to that customer in real time.
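
To make the kind of correlation Bawa describes concrete, here is a minimal sketch in plain Python: count which items co-occur in past baskets, then use those counts to suggest offers for the current shopper. The data set and function names here are hypothetical, and a real deployment would push this work down into the massively parallel database rather than run it in application memory.

```python
from collections import Counter, defaultdict

# Hypothetical order history: customer -> set of items purchased.
orders = {
    "alice": {"camera", "tripod", "sd_card"},
    "bob":   {"camera", "sd_card", "camera_bag"},
    "carol": {"camera", "tripod"},
    "dave":  {"laptop", "mouse"},
}

def build_cooccurrence(orders):
    """Count how often each pair of items appears in the same basket."""
    co = defaultdict(Counter)
    for items in orders.values():
        for a in items:
            for b in items:
                if a != b:
                    co[a][b] += 1
    return co

def recommend(cart, co, top_n=3):
    """Score items that similar baskets contained but this cart doesn't."""
    scores = Counter()
    for item in cart:
        for other, count in co[item].items():
            if other not in cart:
                scores[other] += count
    return [item for item, _ in scores.most_common(top_n)]

co = build_cooccurrence(orders)
# A shopper buying a camera gets offers drawn from similar baskets;
# order among tied scores may vary.
print(recommend({"camera"}, co))  # e.g. ['sd_card', 'tripod', 'camera_bag']
```

The counting step is trivially parallelizable, which is the point: each basket can be tallied independently and the counts merged, whether the engine underneath speaks SQL or MapReduce.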


Bawa says other sectors, such as financial services, manufacturing and health care, also will be looking for patterns. While some backers of Hadoop are trying to address its performance issues, Bawa says it will never match architectures designed to analyze large amounts of data in real time using either SQL or Hadoop.


Most IT organizations are just starting to wrap their minds around the implications of big data architectures. But the movement itself is much bigger than just Hadoop.


