Taking the Complexity Out of Storage

Michael Vizard

There have always been two separate storage worlds. One is defined by files and relies heavily on network-attached storage (NAS) systems. The other is defined by block-level storage, typically associated with a database and delivered over a storage area network (SAN).

Up until recently, NAS and SAN systems required different storage infrastructure. But with the advent of companies such as Scale Computing, we're starting to see a convergence of storage architectures that promises to dramatically cut costs in this area.

Scale Computing offers a range of 1TB to 4TB storage arrays that make use of a cluster file system that stores data in a block-level format. The end result is that a Scale Computing system can be deployed simultaneously as an iSCSI-based SAN that can support a database and as a NAS system that can be used to store files.
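To make that convergence concrete, here is a minimal client-side sketch in Python (generic Linux tooling, not Scale Computing's own software) of how a single host might consume both kinds of storage from one converged array, assuming the block side is exposed over iSCSI and the file side over NFS. The array address, export path and mount point are hypothetical placeholders.

import subprocess

ARRAY_IP = "192.168.1.50"  # hypothetical address of the converged storage cluster

def attach_iscsi_lun(portal_ip):
    # Block side: discover the array's iSCSI targets and log in, so the LUN
    # shows up on the host as a local disk that a database can sit on.
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal_ip], check=True)
    subprocess.run(["iscsiadm", "-m", "node", "-p", portal_ip, "--login"], check=True)

def mount_nfs_share(server_ip, export, mountpoint):
    # File side: mount an NFS export from the same array for ordinary file storage.
    subprocess.run(["mount", "-t", "nfs", f"{server_ip}:{export}", mountpoint], check=True)

if __name__ == "__main__":
    attach_iscsi_lun(ARRAY_IP)
    mount_nfs_share(ARRAY_IP, "/exports/files", "/mnt/files")

The point of the sketch is simply that the same cluster answers on both protocols, so the block and file worlds no longer require separate arrays.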

According to Scale Computing CEO Jeff Ready, the core idea behind starting Scale Computing was to find a way to bring storage array pricing in line with the actual cost of disk drives. To do that, Scale Computing eschewed the traditional storage vendors' reliance on custom processors and firmware in favor of an approach that essentially puts management of the storage backplane into software running across the network.

The end result is that Scale Computing can deliver about 3 TB of storage for about $11,000. And because the system is software-based, it automatically discovers additional nodes of storage as they are added to the overall system.

But Scale Computing didn't stop there. Ready notes that the company has also dramatically simplified the storage management process using a graphical user interface that makes storage clustering technology accessible to just about any IT organization. To that end, in its latest release, Scale has added support for snapshot replication between distributed storage nodes, giving customers a simple way to do backup across a network of Scale Computing storage arrays.

As IT organizations look to cut storage costs at a time when data growth continues to spiral out of control, many have been moving either toward creating pools of SAN storage using storage virtualization or toward relying on clustering software to create pools of file storage on separate systems from companies such as Dell EqualLogic, Hewlett-Packard, EMC, IBM, NetApp and Isilon. The interesting thing about Scale Computing is that we're now really talking about creating one giant pool of storage for everything short of high-performance applications that need the bandwidth associated with Fibre Channel storage systems.


