While SAP HANA (High-performance ANalytic Appliance) started out as a specialty engine designed to run analytic applications, it has rapidly morphed into a full-blown database platform. In fact, counting all the database engines SAP now owns, including Sybase and HANA, the company says it will be the number two supplier of database engines by 2015.
Rather than Sybase, however, SAP says HANA will be the company's flagship database platform.
HANA is based on an in-memory computing architecture that can be configured as either a columnar or a row-level database, with columnar as the default option. By running the database, middleware and business logic all in memory, SAP says, the dependency that IT organizations have on disk-based systems is about to be sharply reduced. With that reduction, the physical footprint of server systems will shrink, along with the amount of power consumed across the data center.
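The row-versus-columnar distinction can be illustrated with a minimal sketch (plain Python, not SAP code): the same table held both ways, showing why analytic queries favor the columnar layout.

```python
# Illustrative sketch (not SAP code): the same table held in a
# row-oriented versus a column-oriented in-memory layout.

# Row-oriented: each record is stored together -- good for
# fetching whole rows (transactional access).
rows = [
    {"id": 1, "region": "EMEA", "revenue": 120.0},
    {"id": 2, "region": "APJ",  "revenue": 340.0},
    {"id": 3, "region": "EMEA", "revenue": 210.0},
]

# Column-oriented: each attribute is stored contiguously -- good for
# scanning one column across many rows (analytic access).
columns = {
    "id":      [1, 2, 3],
    "region":  ["EMEA", "APJ", "EMEA"],
    "revenue": [120.0, 340.0, 210.0],
}

# An analytic query such as SUM(revenue) touches only one array
# in the columnar layout...
total_columnar = sum(columns["revenue"])

# ...but must pick the field out of every record in the row layout.
total_row = sum(r["revenue"] for r in rows)

assert total_columnar == total_row == 670.0
```

A real columnar engine adds compression and vectorized scans on top of this contiguity, but the access-pattern trade-off is the same.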
SAP plans to move its current data warehouse offering, which runs on top of its NetWeaver middleware, to the HANA platform in 2012. The company also plans to deliver master data management and data governance services via HANA, while moving a range of business intelligence, predictive analytics, OLAP and enterprise performance management applications to the platform as well.
As part of its overall strategy, SAP has begun to introduce a series of HANA-based accelerators that optimize specific functions within existing SAP applications. The expectation is that as customers gain more exposure to these applications, they will become more comfortable developing applications for HANA.
SAP also plans to add the ability to migrate data between HANA and Hadoop platforms. The basic idea is to treat Hadoop systems as an inexpensive repository of tier 2 and tier 3 data that can be processed and analyzed at high speeds on the HANA platform.
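The tiering idea described above can be sketched as follows. This is a hypothetical toy API, not SAP's: two dictionaries stand in for the fast in-memory tier (HANA) and the cheap bulk store (Hadoop), with cold data promoted into memory on demand for analysis.

```python
# Illustrative sketch (hypothetical API, not SAP's): a cheap bulk
# store stands in for Hadoop as a tier-2/3 repository, and a fast
# in-memory tier stands in for HANA, where hot data is analyzed.

hot_tier = {}    # in-memory: small, fast, expensive
cold_tier = {}   # bulk store: large, slow, cheap

def write(key, record, hot=True):
    """Route a record to the hot or cold tier."""
    (hot_tier if hot else cold_tier)[key] = record

def promote(key):
    """Pull a cold record into memory so it can be processed quickly."""
    if key not in hot_tier and key in cold_tier:
        hot_tier[key] = cold_tier.pop(key)

write("q1_2012", {"revenue": 500}, hot=True)     # current data stays hot
write("q1_2009", {"revenue": 90},  hot=False)    # historical data goes cold

promote("q1_2009")  # migrate tier-2 data into the fast tier on demand
assert "q1_2009" in hot_tier and "q1_2009" not in cold_tier
```

The design choice this illustrates is that the expensive in-memory tier holds only what is being actively analyzed, while the bulk of the data sits in commodity storage until needed.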
In most cases, adding support for HANA means replacing a disk-based relational database engine with a faster in-memory database, allowing the existing application to improve performance while relying on less code. Over time, the SAP HANA architecture means the line between business logic, databases and middleware will blur as more code moves into memory.
That may present significant challenges for developers who want to port applications across different systems. But Sethu M, deputy CTO for SAP Labs, says that as in-memory computing evolves, developers will become more adept at separating process-centric logic from data-centric logic, so that applications can move from one in-memory computing architecture to another.
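The separation Sethu M describes can be sketched as a simple layering discipline (an assumed design, not anything from SAP): data-centric logic lives behind an interface, so the process-centric logic above it ports unchanged when the underlying engine is swapped.

```python
# Illustrative sketch (assumed design, not from SAP): keeping
# process-centric logic separate from data-centric logic so the
# data layer can be swapped between in-memory engines.

from abc import ABC, abstractmethod

class DataStore(ABC):
    """Data-centric logic lives behind this interface."""
    @abstractmethod
    def total(self, column):
        ...

class InMemoryStore(DataStore):
    """One possible engine; another vendor's store could replace it."""
    def __init__(self, columns):
        self._columns = columns
    def total(self, column):
        return sum(self._columns[column])

def quarterly_report(store: DataStore):
    """Process-centric logic: depends only on the DataStore interface,
    so it moves unchanged from one architecture to another."""
    return {"total_revenue": store.total("revenue")}

store = InMemoryStore({"revenue": [120.0, 340.0, 210.0]})
assert quarterly_report(store) == {"total_revenue": 670.0}
```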
It's pretty clear at this point that the future of enterprise computing is going to be tied closely to in-memory computing, as IT organizations look to simultaneously boost application performance and dramatically reduce the sheer volume of IT infrastructure that needs to be managed. What's not clear is just how long that journey might take.