Arthur Cole spoke with Kam Eshghi, senior director of marketing, Enterprise Computing Division, Integrated Device Technology. Initial deployments of enterprise-class solid-state disks (SSDs) featured primarily SAS and SATA interfaces, which made sense considering they were intended to replace the hard-disk drives (HDDs) already in place. Lately, though, a new generation of SSDs is sporting PCIe connectivity, which should make the devices even faster and more flexible for high-throughput environments. Integrated Device Technology (IDT) is one company hoping to use Intel's Non-Volatile Memory Host Controller Interface (NVMHCI) as a standard around which to build a common PCIe architecture for SSDs. Eshghi lays out the strategy.
Cole: There have been a number of attempts to standardize around the PCIe bus. What makes NVMHCI different?
"PCIe will emerge as the second dominant interface for the enterprise, offering a significant boost to performance as compared to other interfaces and leading on cost-per-IOPS."
Eshghi: PCIe-based SSDs offer extremely high performance at lower latency compared to SAS/SATA SSDs. For that reason, they are very attractive for caching and high-performance drive usage models. However, adoption of PCIe SSDs has been inhibited by the lack of a standard controller interface and register programming interface. Enterprise NVMHCI provides a standard programming interface for PCIe SSDs that abstracts out all Flash management and is optimized for both cache and drive usage models in enterprise applications. This allows PCIe SSD vendors to focus on building great SSDs, while OS vendors deliver drivers for all PCIe SSDs. That is, PCIe SSD suppliers no longer need to provide a proprietary driver for every OS. This standardization dramatically simplifies OEM qualification of PCIe SSDs and drives adoption. The Enterprise NVMHCI standard is driven by Intel, Dell, Microsoft, IDT, and 50-plus other companies, so there is strong industry support.
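The standardized register programming interface Eshghi describes is what lets one OS-supplied driver work with every compliant PCIe SSD. As a rough illustration only (the bit layout below follows the published NVMHCI/NVMe 1.0 controller register map for the version register; the function name is hypothetical), a driver could decode a controller's version field the same way regardless of vendor:

```python
# Illustrative sketch, not a real driver. With a standard register map,
# the meaning of each field is fixed by the spec, so one decode routine
# serves every vendor's device.

def decode_version(vs: int) -> str:
    """Decode the 32-bit version (VS) register into 'major.minor'.

    Per the 1.0 register map, bits 31:16 hold the major version
    and bits 15:0 the minor version.
    """
    major = (vs >> 16) & 0xFFFF
    minor = vs & 0xFFFF
    return f"{major}.{minor}"
```

Without such a standard, each proprietary controller would need its own register definitions, and therefore its own driver, for every operating system; with it, the OS vendor ships the driver once.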
Cole: Does it seem likely that PCIe can ever be extended beyond the server backplane onto the network itself? Would it provide a viable alternative to Fibre Channel or even Ethernet itself?
Eshghi: There have been some efforts to expand PCIe beyond its current intra-chassis home into the network, including PCIe protocol tunneling, usually through Ethernet, and PCIe cabling. While there are some advantages to be gained with PCIe, either native or tunneled through another protocol, in a virtualized local network environment with I/O sharing, we still see this as a very niche usage model. The PCI SIG has published a specification for PCIe cabling to extend PCIe as a native inter-chassis protocol, but again the usage model is very niche. One other area where PCIe is being extended outside the box is wireless modems, where there are some recent efforts to extend PCIe over a network to create smart wireless docking stations.
Cole: We're still seeing SSDs coming out with SAS and SATA interfaces. Will those go the way of the dinosaur soon?
Eshghi: We expect that over the next three years SAS and PCIe will dominate the enterprise SSD market. SAS is not going away because Enterprise HDDs will continue to lead on cost-per-GB, and the same SAS ports will be used for SSDs. PCIe will emerge as the second dominant interface for the enterprise, offering a significant boost to performance as compared to other interfaces and leading on cost-per-IOPS. We expect SATA to be dominant for PC applications, not enterprise, as it lacks enterprise-class features.