IBM Acquisition Federates Massive Amounts of Distributed Data

Michael Vizard

With today's acquisition of Vivisimo, a leading provider of discovery and navigation software, IBM is filling a gap in its data management portfolio that has become more glaring with the rise of Big Data.

Big Data frameworks such as Hadoop have significantly reduced the cost of storing massive amounts of data. Anjul Bhambri, IBM vice president of Big Data products, says that Vivisimo gives IBM a set of tools that allows IT organizations to visualize structured and unstructured data regardless of the repository in which it resides.

Instead of moving massive amounts of data out of, say, an SAP application or a Hadoop cluster, Bhambri says Vivisimo provides a set of visual tools for correlating and analyzing that data where it is stored in the enterprise.

Bhambri says that Vivisimo provides a more federated approach to managing data in the enterprise that scales better in distributed environments than other data management platforms such as Autonomy from Hewlett-Packard or the Endeca platform acquired by Oracle.


In all three cases it's apparent that the major vendors now see the management of massive amounts of data at scale as the next big thing in the enterprise. IBM is taking that one step further by also announcing today that it is expanding its Big Data platform, which combines analytics software with Apache Hadoop to analyze petabytes of data, to run on third-party distributions of Hadoop, beginning with Cloudera.


Bhambri says it's clear that trying to manage data across a series of silos in the enterprise isn't going to be practical. What's required is a layer of software that provides a single point of access to all enterprise repositories.
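To make the idea of that layer concrete, here is a minimal sketch of a federated access pattern: a single query interface fans a search out to each registered repository, which evaluates the query in place and returns only its matches. All class and method names here are hypothetical illustrations, not Vivisimo's or IBM's actual API.

```python
class Repository:
    """A data source queried in place (e.g., an RDBMS or a Hadoop cluster)."""

    def __init__(self, name, records):
        self.name = name
        self.records = records  # stand-in for data that never leaves the silo

    def search(self, term):
        # Each repository evaluates the query locally and returns
        # only the matching records, not its full contents.
        return [r for r in self.records if term in r]


class FederatedIndex:
    """Single point of access across all registered repositories."""

    def __init__(self):
        self.repositories = []

    def register(self, repo):
        self.repositories.append(repo)

    def search(self, term):
        # Fan the query out, then correlate results by source repository.
        return {repo.name: repo.search(term) for repo in self.repositories}


index = FederatedIndex()
index.register(Repository("erp", ["order 1001", "invoice 2002"]))
index.register(Repository("hadoop", ["clickstream order 1001", "log entry"]))

print(index.search("order"))
```

The point of the pattern is that only query results cross the wire; the bulk data stays wherever it already lives, which is why this approach scales in distributed environments.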

To one degree or another, IBM, HP, Oracle and others such as Attivio are trying to address a data management crisis that has been building for years. But with the advent of tools such as Hadoop, that crisis is moving quickly up the list of priorities that IT organizations need to address. After all, there's not much point to collecting massive amounts of data that nobody can actually use.
