A few weeks ago, I ran a little blurb about 3Leaf Systems' recent deal with Intel to license the Intel QuickPath interconnect system for its line of appliance-based (soon-to-be silicon-based) virtual I/O technology.
The company has already licensed AMD HyperTransport, meaning it now has a lock on providing virtual networking tools for the entire x86 market.
But that's only the beginning of the story. In talking with 3Leaf's CEO, B.V. Jagadeesh, it turns out that the company is planning to be a major disruptive force in enterprise networking and isn't shy about its intentions to step on some pretty big toes -- particularly in the NIC and HBA fields -- to produce a more flexible, dynamic data center.
The crux of 3Leaf's plans lies in the fact that server clusters can deliver the same kind of processing power as expensive mainframes, and in a more scalable and flexible manner to boot. The only stumbling block is the cost and complexity of the network fabrics that tie all those servers together. Wouldn't it be great to have all those CPUs communicating directly, using commodity fabric technology (and drivers) and off-the-shelf hardware?
Enter QuickPath and HyperTransport. 3Leaf has figured out a way to extend those two interconnects beyond the single-server environment so that their resources (CPU, memory and I/O) can be pooled together and dynamically allocated, within a few seconds, to wherever they are needed.
"The key pieces of what we do are not new," Jagadeesh said. "They've been in existence since the 1970s for mainframe-class systems. What we bring that's new is that we combine all of those concepts and apply them to x86 servers to create a virtual compute environment. By virtualizing the CPU, memory and I/O, we can bring mainframe-class features in terms of scalability, flexibility and reliability into the server architecture. That is translated into dramatic costs savings and a highly optimized datacenter moving forward."
Big deal? Isn't that what fabrics are all about? Perhaps, but Jagadeesh says 3Leaf can do it using 80 percent fewer SAN and LAN switch ports, 85 percent fewer NICs and HBAs, and 70 percent less cabling. Ask any CIO if they'd like to shave 85 percent off their HBA and NIC capital budget, and they'd probably tell you that, yeah, it's a big deal.
The company has built its technology into the V8000 virtual server, which currently exists in off-the-shelf appliance form. The next step is an all-silicon version that would sit on the motherboard right next to the Opteron or Nehalem processors. That's when the concept of resource pooling really begins to take off, because the individual servers in the cluster will communicate directly, rather than through a separate appliance.
It's basically the clustered version of the old mainframe backplane, but without an expensive and complex network architecture.
Sounds like a big deal to me, too.