Changing the Way You Purchase Storage
Ensure that IT has the flexibility to build and efficiently run a shared infrastructure.
IBM today announced that it has set a new record with the scanning of 10 billion files in 43 minutes, which represents a 37x increase over the old mark of 1 billion files in three hours. To accomplish that task, IBM deployed its General Parallel File System (GPFS) on top of flash memory arrays from Violin Memory.
Bruce Hillsberg, director of storage systems for IBM Research, says this is significant in the era of Big Data because every backup and replication task associated with data management is dependent on scanning. As the amount of data that needs to be managed continues to grow exponentially, IT organizations are going to have to rely more on the ability to scan that data in memory to effectively manage it.
This isn't the only significant event involving memory lately. IBM also recently unveiled a new kind of phase-change memory technology that promises to be 100 times faster than flash. And early in 2012, the company plans to load up 7.5 TB of flash memory into its XIV storage systems.
IBM is hardly alone in stressing the growing importance of memory. Microsoft has big plans for in-memory computing as it relates to SQL Server and SAP is working with a host of vendors to develop a new generation of servers based on in-memory computing.
What all this means going forward is that the metrics we use to measure application performance today will soon be obsolete, thanks largely to the advent of relatively inexpensive memory and, further down the road, to advances in memory technology.
The implications of all that memory for how applications are constructed are just as profound. The day when access to plentiful memory fosters the building of composite applications, whose modules interact with each other in real time, is not all that far away.