
Loraine Lawson

In-memory computing may not seem directly related to integration, but in some cases it's actually usurping integration's role, taking over from ETL batch processes, according to a recent on-demand webinar, "In-Memory Computing: A Discussion on Lifting the Burden of Big Data."


The discussion happened earlier this week, and the recording has since been made available for free viewing - which is a good thing for me, since I'm horrible about signing up for webinars and then forgetting about them. I really wanted to catch this particular webinar because I keep seeing blurbs about in-memory analytics, but I wasn't certain what it meant in terms of integration.


Nathaniel Rowe of Aberdeen's Enterprise Data Management Practice led the event, which also featured Dave Carlisle of HP, Amit Sinha of SAP, who spoke on HANA, and Erik Lillestolen of HP's Business Critical Systems group, which handles SAP BI appliances.

Rowe opened with a simple explanation of in-memory computing. Basically, he said, it's based on the philosophy of moving data as close to the processor as possible to eliminate bottlenecks, including slowdowns in the network, the data warehouse, the cloud, the application you're using to access the data - even the disk and input/output device on which the data is stored. You take all of that out of the equation, he added, by loading the data directly into the random access memory of the server.
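
To make that idea concrete, here's a minimal Python sketch (the file layout, sizes and names are mine, purely illustrative): one query path re-reads a table from disk on every query, while the "in-memory" path queries a copy that was loaded into RAM once.

```python
import os
import random
import tempfile

# Build a sample "table" on disk: 200,000 rows of small integers, one per line.
rows = [random.randint(0, 9) for _ in range(200_000)]
path = os.path.join(tempfile.mkdtemp(), "demo_table.txt")
with open(path, "w") as f:
    f.write("\n".join(str(r) for r in rows))

def query_from_disk(table_path):
    """Re-read the file for every query: disk I/O sits in the data path."""
    with open(table_path) as f:
        return sum(int(line) for line in f)

# The in-memory approach: load the table into RAM once, then run every
# subsequent query against the resident copy, taking disk out of the equation.
resident_table = list(rows)

def query_in_memory(table):
    return sum(table)

# Both paths answer the same question; only the data path differs.
assert query_from_disk(path) == query_in_memory(resident_table)
```

Production in-memory platforms such as HANA do far more than this (columnar storage, compression, parallel scans), but the core trade is the same: pay once to load data into RAM, then stop paying the disk penalty on every query.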


That allows you to take advantage of the high-end technology available in today's high-speed processors and memory-rich cores, he explained.


Why would you need to do that? Well, it comes down to Big Data. Organizations simply have too much data, and they're frustrated with their ability to access it and put it to use. Rowe cited an Aberdeen survey conducted last winter, which found 53 percent of organizations say they don't get information fast enough and 47 percent say too much data is inaccessible and underused within the organization.


The survey also looked at how well data was managed by those who used in-memory analytics versus those who did not. Those who did were able to query data 107 times faster than those who did not. At its slowest, in-memory analytics took five minutes, but averaged a 42-second response time. By comparison, those who did not have in-memory computing solutions took an average of 75 minutes to query their data.


That's a heck of a difference. What's more, those with in-memory computing power were able to analyze more data - 3.5 times more data - than those without, according to Rowe. End users were happier with the results from their self-service queries, business users even trusted the data more and blue birds cheerfully took out the trash at companies using in-memory computing.
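
Those response-time figures hang together with the headline speedup, by the way. A quick back-of-the-envelope check (variable names are mine):

```python
# Figures cited from the Aberdeen survey: a 42-second average response
# with in-memory analytics vs. 75 minutes without.
without_in_memory_s = 75 * 60   # 75 minutes expressed in seconds
with_in_memory_s = 42

speedup = without_in_memory_s / with_in_memory_s
print(round(speedup))  # rounds to 107, matching the "107 times faster" figure
```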


OK, you caught me - I'm kidding about the blue birds. But you get the idea: Data was better and faster, and data analysts were happier, at companies where they used in-memory analytics.


The webinar included a use case from Nongfu Springs, a China-based water and juice company. The panel also answered questions about how to build a business case for investing in in-memory computing, what types of organizations can benefit and which market segments should stand up and take notice. That's a lot of ground to cover in an hour and 10 minutes, but given the growing chatter about in-memory, it's well worth it.


If you've caught spring fever and wouldn't mind taking a break this week for a webinar or two, here's a list of intriguing upcoming events.


  • This week, the Bloor Group's Briefing Room will discuss "Enabling Business Intelligence and Analytics with Right-time Information." That's a pretty long title, I know, but that about sums it up. It'll feature David Loshin, a data expert and the president of Knowledge Integrity. The vendor guest will be Attunity, which offers real-time data integration and event data capture solutions. This webcast will be Tuesday, March 27 at 4 p.m. ET and you can follow it on Twitter using the hashtag #BriefR.
  • The Open Group will hold a free webinar on the SOA Reference Architecture Wednesday, March 28, at 12 a.m. EST. You'll need to pre-register. Chris Harding of the Open Group will be joined by speakers from IBM and Applied Technology Solutions to discuss the finished, published version of the Open Group SOA Reference Architecture. They'll also discuss how to use it.
  • Interested in continuous integration? SD Times is hosting a free webinar Thursday, March 29, at 2 p.m. EDT, on "Taking Enterprise CI to the Next Level." This appears to be a solution-specific event, since representatives from Electric Cloud and Perforce will cover how to use their solutions together to support enterprise-wide continuous integration. SD Times Editor-in-Chief David Rubinstein will moderate.
  • One of my favorite webinar sources, TDWI, will hold a discussion on "Big Data and Your Data Warehouse" next Thursday, April 5, at 9 a.m. PT. It'll feature Philip Russom, who will discuss the tools, infrastructure and processes you'll need to put Big Data to work with your existing data warehouse.

