
    Is Smart Hardware Still Relevant in Software-Defined Storage?

    The numbers are stark for enterprise executives: Data loads are set to increase by perhaps 50 percent in the coming year, while the typical IT budget is only slated for a 5 percent bump.

    This disconnect will be felt most keenly in storage, where even the promise of software-defined infrastructure cannot obscure the fact that every bit of data needs to find a home somewhere. That leaves most organizations confronting two options: increase the capex budget to deploy additional storage in-house, or lock the enterprise into potentially significant long-term opex costs by pushing data onto the cloud.

    The good news, though, is that the choice may not be as stark as it seems. With the advent of software-defined storage (SDS), organizations no longer face the complexity and functional rigidity that have long characterized even the most advanced storage technologies. With a powerful new control plane, storage capabilities are likely to increase even as commodity platforms drive down costs at the physical layer.

    Dell’s Robin Kuepers, for example, sees a rapid changeover in the works from today’s application-centric storage infrastructure to a more dynamic, and ultimately lower-cost, environment. The company has teamed up with Nexenta to incorporate software-defined functionality into flash storage platforms like the Dell Compellent array. In this way, Dell hopes to match storage infrastructure more closely to overall data requirements, rather than following the current practice of building costly, and often redundant, capacity for each new IT-driven application. That practice is not only wasteful but woefully slow compared to the rapid-fire deployment capabilities of the cloud.

    Part of this bargain, though, is that much of the functionality found in typical SAN hardware moves up into the software controller, which can be an unnerving change for IT executives who have spent their careers caring for traditional storage environments. Leading vendors, however, see the writing on the wall, which is why companies like EMC are actively looking to bury their own storage array technologies under multiple layers of software. The company’s ViPR platform, for instance, is intended to ride herd over all physical storage in the data center, and will likely become the primary means to deliver the functions of the Storage Resource Management (SRM) stack as sets of services rather than through traditional storage management tools.
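    To make the "functions as services" idea concrete, the sketch below shows how an application might request a volume from a software-defined control plane rather than configuring an array directly. The endpoint, payload fields and token are illustrative assumptions for this article, not EMC ViPR's actual API.

        # Hypothetical sketch: storage provisioning consumed as a service.
        # The URL, fields and auth scheme are assumptions for illustration;
        # they are NOT EMC ViPR's real API.
        import requests

        SERVICE_URL = "https://storage-controller.example.com/api/volumes"  # assumed endpoint
        HEADERS = {"X-Auth-Token": "example-token"}  # assumed auth token

        def provision_volume(name: str, size_gb: int, tier: str = "gold") -> dict:
            """Ask the control plane for a volume by stating intent (size, tier).

            The controller, not the caller, decides which physical array
            ends up backing the volume.
            """
            payload = {"name": name, "size_gb": size_gb, "service_tier": tier}
            resp = requests.post(SERVICE_URL, json=payload, headers=HEADERS, timeout=30)
            resp.raise_for_status()
            return resp.json()  # e.g. {"id": "...", "status": "provisioning"}

        if __name__ == "__main__":
            print(provision_volume("analytics-scratch", size_gb=500))

    The point of the pattern is that the caller never names an array, a LUN or a RAID group; those details stay behind the service boundary, which is what lets the physical layer shift to commodity gear without disturbing applications.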

    Of course, this will require a high degree of federation among storage components, which inevitably leads to the open vs. proprietary question. Red Hat is clearly on the open side (natch) and is busy building an ecosystem of like-minded suppliers around open technologies like the object-based OpenStack Swift storage system. The company is offering certifications through its Online Partner Enablement Network, plus insider access to key technologies from blue-chip partners like AWS, SuperMicro and Intel. Ultimately, the goal is to ensure that enterprises that wish to pursue open platforms for new software-defined storage infrastructure will have a range of hardware options in the channel.
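    For a sense of what the Swift model looks like in practice, the sketch below uploads an object over Swift's standard HTTP API. The account URL and token are placeholders; in a real deployment both come from an authentication step (Keystone or TempAuth).

        # Minimal sketch of storing an object in OpenStack Swift via its HTTP API.
        # The account URL and token are placeholders; obtain real values from
        # Keystone (or TempAuth) authentication.
        import requests

        SWIFT_URL = "https://swift.example.com/v1/AUTH_demo"   # placeholder account URL
        HEADERS = {"X-Auth-Token": "example-token"}            # placeholder token

        # Create the container (idempotent), then PUT the object into it.
        requests.put(f"{SWIFT_URL}/backups", headers=HEADERS, timeout=30).raise_for_status()

        with open("db-dump.tar.gz", "rb") as f:
            resp = requests.put(
                f"{SWIFT_URL}/backups/db-dump.tar.gz",
                headers=HEADERS,
                data=f,
                timeout=300,
            )
        resp.raise_for_status()
        print("stored; ETag (MD5 of object):", resp.headers.get("ETag"))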

    It would be a mistake, however, to confuse commodity hardware with dumb hardware, says Storage Switzerland’s George Crump. Many functions can still deliver high value at the hardware layer, even in software-defined infrastructure. Features that exploit flash’s near-zero latency, for example, can be a key differentiator among platforms, as can basic optimization tools like deduplication and compression. Hardware also has opportunities to outperform today’s SDS platforms in key areas like thin provisioning, snapshots and replication. Ironically, much of this grunt work may end up relegated to abstracted software layers while hardware focuses on more performance-critical tasks like automated tiering and caching.
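    Deduplication is a good example of a function that can live on either side of that line. The sketch below shows the core idea, fixed-size blocks identified by a content hash so each unique block is stored only once; production systems, whether in silicon or in an SDS layer, add variable-size chunking and far more careful collision and metadata handling. The file names are hypothetical inputs.

        # Minimal sketch of fixed-size block deduplication. Production systems
        # use variable-size chunking and robust metadata; this is illustrative.
        import hashlib

        BLOCK_SIZE = 4096  # assumed fixed block size for the sketch

        def dedup_store(path: str, store: dict) -> list:
            """Split a file into blocks, keeping one copy of each unique block.

            Returns the file's "recipe": the ordered list of block fingerprints
            needed to rebuild it. `store` maps fingerprint -> block data.
            """
            recipe = []
            with open(path, "rb") as f:
                while block := f.read(BLOCK_SIZE):
                    fp = hashlib.sha256(block).digest()
                    store.setdefault(fp, block)  # physical copy stored only once
                    recipe.append(fp)
            return recipe

        if __name__ == "__main__":
            store = {}
            r1 = dedup_store("vm-image-1.img", store)  # hypothetical input files
            r2 = dedup_store("vm-image-2.img", store)
            logical = (len(r1) + len(r2)) * BLOCK_SIZE
            physical = len(store) * BLOCK_SIZE
            print(f"dedup ratio: {logical / physical:.2f}x")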

    Software-defined storage, of course, is a requirement for the software-defined data center (SDDC), so it is likely that many organizations will deploy it in one fashion or another (again, that whole capacity vs. budget thing). But that doesn’t mean that all storage or all functionality will have to migrate to a layered control plane, at least not at first.

    SDS will likely reduce storage costs to reasonable levels for the vast majority of enterprise workloads, but when it comes to optimizing storage for critical functions, sometimes a hardware-defined approach will be best.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
