Arthur Cole spoke with Ramana Jonnala, CEO/founder, Convergent.io.
Software-defined networking has caught on among the VMwares and Ciscos of the world, but some firms are already looking to fine-tune that definition with developments like software-defined storage networking. One of them is Convergent.io, which aims to foster greater scale-out connectivity in storage environments by severing the link between data and underlying hardware. As CEO Ramana Jonnala describes it, enterprises have a choice of paying for expensive storage infrastructure, outsourcing it to utility providers, or building their own low-cost, scalable architecture.
Cole: Industry talk is starting to focus on software-defined networking, and now Convergent.io is pursuing software-defined storage networking. What, exactly, have you developed, and how does it differ from current storage virtualization and storage hypervisor systems?
Jonnala: We believe storage has actually been software defined for 20 years. The big array vendors build value in their software but tie that storage functionality to the hardware that stores the data. Storage of data at this point is basically a solved problem. There are a lot of different options out there: host-side flash, NAS/SAN boxes, Amazon S3, Glacier, etc. The big challenge for customers is in the connection between their applications and their data. Host-side flash is great, except when you want to share it, or when it fails. Amazon's Glacier is durable, but not the right option if you want a read to complete in the next few seconds, let alone microseconds.
Storage connectivity has been hamstrung for a long time. From fast, proprietary interconnects like InfiniBand and Fibre Channel, we moved to commodity interconnect with Ethernet, which was slower and hard to share as a storage network. Suddenly, in 2012, we are starting to have really fast Ethernet at a low cost: cheap, fast switches and chipset-based 10 Gb NICs on servers. SDN pushes this a step further by allowing us to finally, for the first time in 20 years, truly innovate in storage networking on top of commodity hardware. The landscape right now is littered with flash and flash/hard disk drive hybrid appliances promising more performance and lower costs than traditional arrays, but these are just variations on storage media that still sit as boxes on the end of the wire, far away from the application.
As part of the founding team at XenSource, which built the Xen hypervisor used by Amazon EC2, we decided to start a company, Convergent.io, to tackle the challenge of scale-out storage connectivity. We're building a new approach that decouples storage functionality from expensive big box array hardware and centralizes it in the network layer to deliver the economics of commodity storage hardware with the power of high-performance scale-out.
When we look at how flash is being inserted into appliances and arrays right now, it just doesn't make sense when you think about the bandwidth and controller limitations that bottleneck the raw performance of this flash. That brought us to the realization that this flash needed to be aligned with the networking layer itself for its full potential to be unlocked and the economics to work for the average virtualized datacenter.
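The bottleneck argument above is easy to see with rough arithmetic. A minimal sketch, with every figure an illustrative assumption rather than a measurement of any real product:

```python
# Illustrative arithmetic: aggregate flash bandwidth vs. a single array
# controller and network uplink. All numbers are assumptions chosen for
# illustration, not specs of any real device.

FLASH_DEVICE_GBPS = 1.0   # assumed sequential bandwidth per flash device, GB/s
NUM_DEVICES = 24          # assumed devices packed into one appliance
CONTROLLER_GBPS = 4.0     # assumed bandwidth ceiling of the array controller
UPLINK_GBPS = 10 / 8      # one 10 Gb Ethernet link expressed in GB/s

raw = FLASH_DEVICE_GBPS * NUM_DEVICES            # 24.0 GB/s of raw media bandwidth
delivered = min(raw, CONTROLLER_GBPS, UPLINK_GBPS)  # narrowest stage wins

utilization = delivered / raw
print(f"raw media: {raw:.1f} GB/s, delivered: {delivered:.2f} GB/s "
      f"({utilization:.1%} of raw)")
```

Under these assumed numbers the controller and the uplink, not the media, set the ceiling, which is the case for placing flash in the network layer rather than behind a single controller.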
We are developing software-defined storage networking (SDSN) technology that combines storage intelligence with networking for maximum performance of flash storage that doesn't sit behind an array's controller. Customers will be able to dynamically add more capacity without worrying about performance constraints or re-architecting their storage environments, as they usually would with static, traditional big box arrays. We will reveal more about the product next year when we are further along, but we're very excited to be developing a solution that is going to deliver a price/performance model dramatically different from everything else on the market.
Cole: With hardware platforms giving way to logical abstraction for things like data processing, storage and networking, is there a danger that the enterprise will encounter compatibility problems as these various software layers attempt to communicate with one another? How do you guard against that?
Jonnala: This is a great observation. Within the networking world, OpenFlow is a great example of this sort of tension. There is broad support for SDN, and the OpenFlow specification provides a framework for all the players to standardize on. However, as with any new standard, not everyone wants to play, and not everyone's implementation will get the semantics right. Still, the standard provides a framework for much of the community to productively move forward.
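To make the OpenFlow model concrete: a controller installs match/action rules in switch flow tables over a standard protocol. A minimal sketch in Python, where plain data structures stand in for the wire protocol (the field names echo OpenFlow 1.0 conventions, but this toy switch is an illustration, not an implementation):

```python
# Toy model of an OpenFlow-style flow table: a controller installs
# match/action rules, and the "switch" applies the first matching rule.
# This stands in for the real wire protocol purely to illustrate the idea.

flow_table = []  # ordered list of (match, actions) pairs, checked in order

def install_flow(match, actions):
    """Controller side: push a rule down to the switch's flow table."""
    flow_table.append((match, actions))

def handle_packet(packet):
    """Switch side: apply the first rule whose match fields all agree."""
    for match, actions in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return actions
    return ["send_to_controller"]  # table miss: punt to the controller

# Steer iSCSI traffic (TCP port 3260) onto a dedicated storage port.
install_flow({"nw_proto": 6, "tp_dst": 3260}, ["output:storage_port"])
install_flow({}, ["output:normal"])  # wildcard fallback for everything else

print(handle_packet({"nw_proto": 6, "tp_dst": 3260}))  # ['output:storage_port']
print(handle_packet({"nw_proto": 6, "tp_dst": 80}))    # ['output:normal']
```

The point of the standard is exactly this separation: any controller that speaks the protocol can program any conforming switch, which is the rally-point dynamic described above.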
As a contrast, look at how storage is currently evolving in the data center: NetApp and EMC both have competing solutions and partnerships to enable host-side flash as a performance enhancement on top of their arrays. In both cases, these solutions basically grow your array into a vertical silo that has tentacles on all of your servers. Enterprise storage remains a morass of slow-moving, piecewise functionality.
Our background is in open source and open standards. Xen was successful because it represented a point of collaboration between many organizations that wanted to see a flexible, mature x86 hypervisor. It continues to have relevance in large-scale hosting environments because of this. Just as OpenFlow provided a rally point for software-defined networking, Convergent is committed to advancing an open and accessible set of interfaces for storage connectivity and data access. Expect to hear more from us on this front over the next year.
Cole: In the end, will this fuel the movement toward more utility-style computing as small and medium firms shift their resources to hosted solutions and the cloud?
Jonnala: As we see cloud companies like Amazon, Google, Facebook, etc., scale their environments with commodity hardware in which functionality is housed in a separate software layer that they can deploy, configure and optimize to their needs, the enterprise world is wondering why it should continue paying premiums to the big box vendors for solutions that don't offer the flexibility and scale-out needed for the dynamics of a virtualized datacenter.
Dropbox and Amazon's S3 are storage utilities with well-run, scale-out storage environments that charge based on consumption. They achieve an economy of scale based on hundreds of thousands of active customers and require large operations teams to deliver on reliability, but they do not come close to the on-site performance of enterprise storage options.
So as a small- to medium-sized firm, your choices are to run your stuff in the cloud, trust that the provider isn't going to screw up the security of your private data, and deal with mediocre performance from a utility storage service; or run your stuff in the enterprise and pay through the nose for more reasonable storage performance by buying that storage in units of terabytes and tens of thousands of IOPS and committing to it on a three-year term.
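That trade-off can be sketched as simple arithmetic. A minimal sketch, where every price and capacity figure is an assumption invented for illustration (real utility and array pricing vary widely and change often):

```python
# Illustrative cost comparison between consumption-priced utility storage
# and a committed on-site array. Every price here is a made-up assumption
# for illustration, not real S3 or array pricing.

def utility_cost(tb_stored, months, price_per_tb_month=95.0):
    """Pay-as-you-go: cost scales with what you actually consume."""
    return tb_stored * months * price_per_tb_month

def array_cost(tb_committed, price_per_tb=3000.0):
    """Committed: buy capacity (and IOPS) up front for a multi-year term."""
    return tb_committed * price_per_tb

# A firm that needs 10 TB today, but whose smallest array purchase
# commits it to 50 TB of capacity for a three-year term:
print(utility_cost(10, 36))  # pay only for the 10 TB consumed over 3 years
print(array_cost(50))        # pay for the full committed 50 TB up front
```

Under these made-up numbers the utility comes out far cheaper when you need only a fraction of the capacity you would have to commit to, which is the consumption-pricing appeal; what it doesn't buy you is the on-site performance discussed above.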
SDNs and software-defined storage networking promise a middle ground between these positions. Emerging commodity switching and flash capabilities mean that utility-based storage services can be simplified and driven to higher performance. Moreover, they can be made to truly scale out in a way that array-based solutions have never been able to do.
We're excited about software-defined storage networking because we think it is going to let admins think about their storage in the same way they think about their networks: as a resource that can be scaled as necessary and replaced as needed. This movement toward the software-defined datacenter (SDC), with technologies like the one we're developing, will help customers both close the flexibility and performance gap in their own private clouds and more easily incorporate public cloud solutions into their IT strategies for greater overall efficiency.