If these new acronyms have not yet appeared on your list of buzzwords, it is time to start following these developments and the related protocols. They have come about as a result of the widespread adoption of virtualization and, indirectly, blade servers (especially those with internalized switches). As servers (and storage) moved from physical NICs with fixed MAC addresses to virtual MACs, vendors have worked together to develop a structure and strategy for improving and managing communication channel performance.
To back up a bit, first there was the virtual server, meaning that a virtual server could 'exist' alongside other virtual servers on one or more physical servers or blade servers. Of course, a virtual server still needs to communicate, even though it has no real NIC card. Hence the advent of the 'virtual' NIC card and virtual Ethernet.
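Since a virtual NIC has no burned-in hardware address, the virtualization layer must mint one. A minimal sketch of how such an address can be generated, assuming the common convention of using a "locally administered" unicast MAC (the function name and approach here are illustrative, not any particular hypervisor's implementation):

```python
import random

def make_virtual_mac() -> str:
    """Generate a random, locally administered, unicast MAC address.

    Two bits of the first octet matter: the 'locally administered' bit
    (0x02) is set, signaling the address was assigned by software rather
    than burned in by a hardware vendor, and the 'multicast' bit (0x01)
    is cleared, so the address identifies a single (virtual) NIC.
    """
    octets = [random.randint(0x00, 0xFF) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE  # set local bit, clear multicast bit
    return ":".join(f"{o:02x}" for o in octets)
```

In practice, hypervisors also track the addresses they have handed out so that no two virtual NICs on the same network collide.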
It should be noted that all of these protocols are related to bridging, not routing. If you are wondering when anyone last used bridging instead of routing, recall that bridging is the basic function we take for granted in a typical non-routing Ethernet switch. Bridging does not really understand IP addresses; it decides where to send frames based on the source and destination MAC addresses. Of course, since a virtual server has no 'real' MAC address, the virtualization software assigns and manages the virtual MACs. One of the purposes of these new protocols is to have the switch hardware take that load off the virtualization software and improve overall system performance.
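The learn-and-forward behavior described above can be sketched in a few lines. This is an illustrative Python model of what a bridge does per frame (real switches implement it in hardware lookup tables; the class and method names are invented for this sketch):

```python
class LearningBridge:
    """Toy model of the MAC learning a non-routing Ethernet switch performs."""

    FLOOD = -1  # sentinel meaning "send out all ports except the ingress port"

    def __init__(self):
        # Forwarding table: source MAC -> port it was last seen on.
        self.mac_table = {}

    def handle_frame(self, src_mac: str, dst_mac: str, in_port: int) -> int:
        # Learn: remember which port the source MAC is reachable through.
        self.mac_table[src_mac] = in_port
        # Forward: use the known port for the destination, else flood.
        return self.mac_table.get(dst_mac, self.FLOOD)
```

For example, the first frame toward an unknown destination is flooded; once a frame has been seen *from* that destination, the bridge forwards directly to the learned port. With virtual servers, the entries in this table are the hypervisor-assigned virtual MACs, which is why the switch hardware must be able to learn and update them via the new protocols.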
So how will this affect your data center network? To begin with, Ethernet switches, both those incorporated within the blade server chassis and traditional external units, will need to understand, learn, and update their MAC tables using these new, yet-to-be-finalized virtualization protocols.
In addition, this could change the way a structured cabling system is designed within the data center. Instead of running all cabling from each rack back to a central patch panel (and core switches), there might be clusters of racks in which the racks are cabled together (via patch panels in larger clusters) to a central point within the cluster; only the cluster (edge) switch is then tied back to the core. This is already beginning to happen with SAN clusters. Of course, the size and scope of the network, the number of physical and virtual servers, and the data center itself will influence many of these decisions.
The various vendors have different proposals on the table, each promoting its own offering as the final standard. Like many other technologies put forth by competing vendors, these will eventually be blended and ratified into an IEEE standard. In the meantime, before making any major investments in Ethernet switching equipment or large-scale data center structured cabling plants, consider following this and other standards as they near finalization.