While the SAP HANA in-memory computing platform may be garnering all the SAP headlines of late, the SAP Sybase relational database may now be more relevant than ever.
SAP today unveiled version 16 of SAP Sybase Adaptive Server Enterprise (ASE), which can now scale linearly to support millions of transactions per minute across databases spanning multiple terabytes.
Dan Lahl, vice president of product marketing for SAP, says SAP was able to boost the performance of SAP Sybase ASE by adding support for more granular lock management and by relaxing query limits.
The performance capabilities of SAP Sybase ASE, says Lahl, are more relevant than ever thanks to the rise of mobile computing applications, which are driving the number of transactions to be processed exponentially higher. A significant percentage of those transactions will be processed on the SAP HANA platform, but Lahl notes that an almost equally large percentage are likely to be executed against legacy applications running on traditional relational databases.
As arguably the fastest relational database for processing transactions, Lahl says SAP Sybase ASE complements SAP HANA—especially now that SAP has developed an SAP Data Fabric that leverages data virtualization to unify the two database architectures.
At the moment, the SAP Data Fabric works primarily within data warehouse applications. But Lahl says it’s only a matter of time before SAP exposes both databases under a unified programming model. The two database systems can already be connected via ODBC, but additional application programming interfaces are forthcoming, says Lahl.
As that architecture evolves, Lahl says many IT organizations are going to standardize on the SAP HANA in-memory database and the SAP Sybase ASE at the expense of rival relational databases, such as Oracle or IBM DB2.
The end result is that, from a database perspective, it’s likely to be a hybrid world for the foreseeable future. The only real challenge at this point is figuring out the right pace of migration to an in-memory database architecture, weighing the benefits of that migration against the total cost of actually making the switch.