At the moment, no one is quite sure how storage systems in the data center are going to evolve. There is a multitude of competing standards attached to a raft of new and legacy technologies, such as Fibre Channel, InfiniBand and Ethernet, that all have to find some way to get along.
More than a few vendors are trying to take advantage of these transitions to push various storage agendas. While that may be an attractive proposition for some IT organizations, many others are leery of abandoning investments in existing storage architectures in favor of emerging technologies, especially when it's clear that existing technologies such as Fibre Channel will continue to improve in performance over multiple iterations.
For that reason alone it was interesting to see two recent developments. The first is the new Emulex Connect Architecture, which defines a three-year road map for how the company plans to address future generations of storage I/O in the data center.
The second is new silicon from Mellanox Technologies that gives server vendors the option of deploying 36 ports of FDR 56Gb/s InfiniBand or 40 Gigabit Ethernet on the same system. According to John Monson, Mellanox vice president of product marketing, we're still a long way from being able to run both options simultaneously, but the cost of deploying these technologies should drop because Mellanox's SwitchX silicon, with its Virtual Protocol Interconnect (VPI) technology, means that future adapters can be configured to support either InfiniBand or 40 Gigabit Ethernet.
IT organizations should find both of these road maps useful because many system vendors rely on products and technologies from Emulex and Mellanox within their own data center offerings. In fact, 80 percent of Emulex's revenues come via OEM partnerships with major server vendors.
According to Judi Uttal, Emulex senior director of product marketing, what Emulex is trying to create is awareness among end customers of how Fibre Channel and Ethernet will converge around a new quad-port converged fabric controller from Emulex, the Emulex Engine XE201 I/O controller.
The XE201 I/O controller supports next-generation PCI Express (PCIe) 3.0 systems. Emulex claims it's the first converged fabric controller capable of offering OEMs the choice of 16Gb/s Fibre Channel or 10Gb/s Ethernet (10GbE) with protocol support for RDMA over Converged Enhanced Ethernet (RoCE), Fibre Channel over Ethernet (FCoE) and iSCSI, as well as 40Gb/s Ethernet (40GbE) connectivity. Emulex says the XE201 supports up to four channels of 8Gb/s Fibre Channel, two channels of 16Gb/s Fibre Channel, four ports of 10GbE, one port of 40GbE, and combinations of the above.
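To put those port options in perspective, a quick back-of-the-envelope calculation shows the raw aggregate line rate each configuration delivers. The sketch below is purely illustrative, using only the port counts and speeds quoted above; the configuration labels are ours, not an Emulex API.

```python
# Port configurations as described for the XE201: (port count, per-port Gb/s).
# The labels are illustrative shorthand, not Emulex product terminology.
XE201_CONFIGS = {
    "4x 8Gb/s Fibre Channel":  (4, 8),
    "2x 16Gb/s Fibre Channel": (2, 16),
    "4x 10GbE":                (4, 10),
    "1x 40GbE":                (1, 40),
}

def aggregate_gbps(ports: int, speed_gbps: int) -> int:
    """Raw aggregate line rate for one configuration, in Gb/s."""
    return ports * speed_gbps

for name, (ports, speed) in XE201_CONFIGS.items():
    print(f"{name}: {aggregate_gbps(ports, speed)} Gb/s aggregate")
```

The arithmetic makes the convergence pitch concrete: both Fibre Channel modes total 32Gb/s and both Ethernet modes total 40Gb/s, so one controller covers several fabric choices at broadly comparable aggregate bandwidth.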
But the real point, says Uttal, is that if IT organizations decide to standardize on this architecture, regardless of whether it comes via server vendors or an Emulex reseller, they can pretty much count on being able to evolve at their own pace. In addition, Emulex has made available a management environment, called the OneCommand Management console, that provides a centralized approach to managing all this evolving chaos.
The good news is that new technologies will make it a lot easier to scale I/O performance in the data center. But the better news is that in the not-too-distant future, IT organizations should be able to absorb those technologies at their own pace in a way that won’t break the IT budget or introduce additional complexity.