Seven Best Practices for Virtualization
Virtualization is pushing IT toward new horizons, bringing whole new sets of opportunities into view.
Enterprises have been quick to adopt virtualization as the primary means of streamlining server architectures and of matching resource footprints to data loads.
Integrating those new virtual environments with legacy data center infrastructure, namely storage and networking, has proven more problematic. It turns out that much of the efficiency and scalability virtualization creates in the server farm comes at the expense of the wider data environment. That is largely why network and storage developers have been working overtime recrafting their platforms to make them more virtual-friendly.
Dell, for one, has been working closely with VMware on the Dell Fluid Data portfolio, which delivers a mix of hardware and software configurations designed to update the company's storage platforms for an increasingly virtual universe. The company recently released a pair of EqualLogic arrays, the PS4100 and PS6100, aimed at mid-level environments up to 72 TB. Those are matched by new integration tools for VMware 3.1, as well as a new replication adapter that more closely aligns the Compellent automation platform with vCenter Site Recovery Manager, a move intended to improve migration and disaster recovery. At the same time, Dell now enables management of multiple PowerVault arrays from a single vCenter instance.
New management techniques may help integrate virtualization into legacy environments, but at some point new storage architectures will be necessary as the need to go virtual starts to impact a wider range of applications. That's given startups like Astute Networks a foothold in the enterprise with high-speed Flash technology. The company's ViSX G3 appliance offers a quick shot of about 80,000 sustained IOPS in support of high random read/write applications like databases, ERP and email. The unit is designed to work with existing SAN and NAS environments, appearing as a standard iSCSI device that offloads VMs from host servers using VMware's vMotion platform.
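Because the appliance presents itself as a standard iSCSI target, attaching it to an existing ESXi host follows the usual software-iSCSI workflow. A minimal sketch using esxcli; the adapter name (vmhba33) and portal address are illustrative placeholders, not values from Astute's documentation:

```shell
# Enable the ESXi software iSCSI initiator (no-op if already enabled)
esxcli iscsi software set --enabled=true

# Point dynamic discovery at the appliance's iSCSI portal
# (adapter name and IP address are illustrative placeholders)
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba33 --address=192.168.0.50:3260

# Rescan the adapter so newly exposed LUNs appear as datastore candidates
esxcli storage core adapter rescan --adapter=vmhba33
```

From there, a VMFS datastore can be created on the new LUN and running VMs migrated onto it without downtime.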
Once you've decided to deploy Flash technology, however, the question becomes where to place it. According to newcomer Nutanix, storage networking is only needed to accommodate large, disk-based storage arrays, so it will likely become an anachronism before too long. The company has released a converged platform called Complete Cluster that melds computing and storage resources into a single tier, doing away with the SAN and all the hassles that go with it. Clusters consist of 2U units containing four 8-core x86 nodes tied to 192 GB of RAM and either 1.2 TB of SATA SSD, 1.3 TB of Fusion-io or 20 TB of normal SATA for environments that value capacity over speed. Each block runs about $75,000 and can be up and running within 30 minutes.
And despite its ties to storage behemoth EMC, VMware is starting to embrace the idea of local storage. The new vSphere Storage Appliance allows local storage to appear as an NFS datastore, essentially providing SAN functionality to the local tier using a pair of ESXi hosts. Note, however, that the system cannot handle a full vCenter deployment, so it doesn't function well as a standalone environment.
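The VSA's own manager automates the plumbing, but mechanically the result is an ordinary NFS datastore mount, which any ESXi host can perform with esxcli. A hedged sketch; the server address, export path, and datastore name below are all illustrative placeholders:

```shell
# Mount an NFS export as a datastore on this ESXi host
# (address, share path, and volume name are illustrative placeholders)
esxcli storage nfs add --host=192.168.0.60 \
    --share=/exports/vsa-datastore --volume-name=VSA-DS1

# Confirm the datastore is mounted and accessible
esxcli storage nfs list
```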
In the end, simplicity wins out over engineering prowess: the platforms most likely to thrive are the ones that make virtual infrastructure easier to integrate and manage, not merely faster.