When it comes to Big Data applications, a triumvirate of open source technologies has emerged as the dominant set of platforms: Hadoop, followed closely by Apache Spark, an in-memory cluster computing framework, and Apache Kafka, a real-time messaging platform.
Having already added support for Hadoop, Syncsort is now extending the reach of its extract, transform and load (ETL) software to Apache Spark and Kafka.
Tendü Yoğurtçu, general manager of Syncsort’s Big Data business, says that while Hadoop remains the most widely deployed of the three open source platforms, interest in Apache Spark is rising sharply. The reason, says Yoğurtçu, is that rather than running Big Data analytics applications in batch mode, many organizations want to run those applications in real time on an in-memory platform that lets them blend data from multiple sources.
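To make the blend-then-analyze pattern Yoğurtçu describes concrete, here is a minimal sketch in plain Python. It is purely illustrative, not Syncsort's or Spark's actual API; the source names, fields, and the `blend` helper are all hypothetical stand-ins for what an in-memory join of two feeds might look like.

```python
# Illustrative sketch: blending records from two hypothetical sources --
# a warehouse extract and a clickstream feed -- in memory, keyed on a
# shared customer ID, before any analytics run on the merged result.

warehouse_rows = [
    {"customer_id": 1, "segment": "enterprise"},
    {"customer_id": 2, "segment": "smb"},
]

clickstream_rows = [
    {"customer_id": 1, "page_views": 42},
    {"customer_id": 2, "page_views": 7},
    {"customer_id": 3, "page_views": 3},  # no matching warehouse record
]

def blend(primary, secondary, key):
    """Inner-join two lists of row dicts on `key`, merging matches."""
    index = {row[key]: row for row in primary}
    return [
        {**index[row[key]], **row}
        for row in secondary
        if row[key] in index
    ]

blended = blend(warehouse_rows, clickstream_rows, "customer_id")
# Customer 3 is dropped because it has no warehouse-side match.
print(blended)
```

In a real Spark deployment the same idea is expressed as a DataFrame join across sources loaded into the cluster's memory, which is what makes the real-time blending Yoğurtçu mentions practical at scale.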
Taken together, Yoğurtçu says, the combination of Hadoop, Spark and Kafka is creating the foundation for more agile data warehouses. Naturally, the degree to which that combination of technologies will replace traditional data warehouses remains to be seen. What is certain is that, collectively, these technologies will serve as a platform for running many of the analytics applications that formerly ran on data warehouse platforms costing orders of magnitude more to acquire, deploy and manage.
In fact, to help facilitate that transition, Syncsort has contributed to the open source community a connector it created for integrating Spark with the mainframe systems where many of those data warehouses run.
Obviously, IT organizations are not going to replace overnight data warehouse platforms that often contain terabytes, sometimes even petabytes, of data. But in the months and years ahead, a lot of data will move bi-directionally between those legacy data warehouse systems and clusters running instances of Hadoop and Spark.