In-memory computing may not seem directly related to integration, but in some cases, it's actually usurping its role, taking over from ETL batch processes, according to a recent on-demand webinar, "In-Memory Computing: A Discussion on Lifting the Burden of Big Data."
The discussion happened earlier this week and it's recently been made available for free viewing - which is a good thing for me, since I'm horrible about signing up for webinars and then forgetting about them. I really wanted to catch this particular webinar because I keep seeing blurbs about in-memory analytics, but I wasn't certain what it meant in terms of integration.
Nathaniel Rowe of Aberdeen's Enterprise Data Management Practice led the event, which also featured Dave Carlisle of HP, Amit Sinha of SAP, and Erik Lillestolen of HP's SAP BI Appliances, Business Critical Systems group.
In-memory computing keeps data in RAM rather than on disk, which allows you to take advantage of the high-end technology available in today's high-speed processors and memory-rich cores, he explained.
Why would you need to do that? Well, it comes down to Big Data. Organizations simply have too much data, and they're frustrated with their ability to access it and put it to use. Rowe cited an Aberdeen survey conducted last winter, which found 53 percent of organizations say they don't get information fast enough and 47 percent say too much data is inaccessible and underused within the organization.
The survey also looked at how well data was managed by those who used in-memory analytics versus those who did not. Those who did were able to query data 107 times faster than those who did not. At its slowest, in-memory analytics took five minutes, but averaged a 42-second response time. By comparison, those who did not have in-memory computing solutions took an average of 75 minutes to query their data.
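The 107x figure lines up with the two averages Rowe cited; here's a quick back-of-the-envelope check:

```python
# Sanity-check the Aberdeen speed claim from the webinar:
# a 42-second average in-memory query vs. a 75-minute
# average for organizations without in-memory computing.
in_memory_seconds = 42
without_seconds = 75 * 60  # 75 minutes expressed in seconds

speedup = without_seconds / in_memory_seconds
print(round(speedup))  # roughly the 107x figure Rowe cited
```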
That's a heck of a difference. What's more, those with in-memory computing power were able to analyze more data - 3.5 times more data - than those without, according to Rowe. End users were happier with the results from their self-service queries, business users even trusted the data more and blue birds cheerfully took out the trash at companies using in-memory computing.
OK, you caught me - I'm kidding about the blue birds. But you get the idea: Data was better, faster and data analysts were happier at companies where they used in-memory analytics.
The webinar included a use case from Nongfu Spring, a China-based water and juice company. The panel also answered questions about how to build a business case for investing in in-memory computing, what types of organizations can benefit and which market segments should stand up and take notice. That's a lot of ground to cover in an hour and 10 minutes, but given the growing chatter about in-memory, it's well worth it.
If you've caught spring fever and wouldn't mind taking a break this week for a webinar or two, here's a list of intriguing upcoming events.