
    IBM Unfurls Global Project Elastic Storage Initiative

    In theory at least, cloud computing is pretty much defined by elasticity. The whole idea of building a cloud, after all, is to make IT infrastructure resources available on demand. IBM announced today that it is applying that concept to storage on a global scale, regardless of the format in which data is stored or the devices used to house it.

    Later this year, IBM will deliver a truly elastic data storage capability, code-named Project Elastic Storage, on top of the IBM SoftLayer cloud computing platform. Built on the IBM General Parallel File System (GPFS) technology that IBM has been using in high-performance computing (HPC) environments for decades, Project Elastic Storage turns GPFS into a cloud service capable of supporting file, block and object-based storage services.

    Bernie Spang, IBM vice president of strategy for software-defined environments, says that besides providing a single global namespace spanning both IBM and third-party storage devices, the OpenStack-compatible Project Elastic Storage initiative will also provide distributed caching services, encryption, and the ability to remotely delete files in the event that a device goes missing. In addition to OpenStack Cinder and Swift support, Elastic Storage will support other open APIs such as POSIX and Hadoop.
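
    For illustration only, here is a minimal sketch of how an application might read and write objects through a Swift-compatible API of the kind Elastic Storage is slated to expose. It uses the python-swiftclient library; the endpoint, credentials, container and object names are hypothetical placeholders, not details IBM has published.

        # Minimal sketch: storing and retrieving an object over a Swift-compatible API.
        # The auth URL, credentials, and names below are hypothetical placeholders.
        from swiftclient.client import Connection

        conn = Connection(
            authurl="https://objectstorage.example.com/auth/v1.0",  # hypothetical endpoint
            user="account:user",                                    # hypothetical credentials
            key="api-key",
        )

        # Create a container, then write and read back a small object.
        conn.put_container("elastic-demo")
        conn.put_object("elastic-demo", "reports/q2.csv",
                        contents=b"region,revenue\nEMEA,1200\n",
                        content_type="text/csv")

        headers, body = conn.get_object("elastic-demo", "reports/q2.csv")
        print(headers.get("content-length"), body.decode())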

    In effect, Project Elastic Storage represents an IBM commitment to automate the creation of storage tiers, defined entirely in software, across hybrid cloud computing environments. As such, Spang says, not only are the economics of storage fundamentally changing for the better, but the days when storage administrators had to manually provision storage systems are rapidly coming to a close.

    Spang also says that HPC environments have been wrestling with managing IT infrastructure at scale for decades. Many of the issues facing cloud computing environments are similar in nature; it’s just that cloud environments tend to be much more distributed. It makes sense, however, to apply technologies that have already been proven in HPC environments to solve cloud computing challenges, says Spang. In this case, IBM is making use of HPC technologies to manage storage at a much higher level of abstraction in the age of the cloud.

    In the meantime, as cloud computing continues to evolve, it’s clear that a fundamental realignment of enterprise computing strategies is at hand. Instead of provisioning IT infrastructure to handle the peak performance demands of application workloads while keeping every piece of data stored on premises, IT organizations will look to strike a balance between public cloud services and their own internal IT infrastructure resources.

    Mike Vizard
    Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.
