Thin provisioning has proven to be a lot trickier, and more controversial, than many people thought. While the concept is simple enough — present applications with a large virtual volume up front while committing physical disk capacity only as data is actually written — there is growing discord as to how to do it right.
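The core idea can be seen in miniature with a sparse file, the same allocate-on-write trick storage arrays perform at volume scale. This is an illustrative sketch on a POSIX filesystem, not any vendor's implementation; the filename and the 1 GiB size are arbitrary:

```python
import os

# A sparse file advertises a large logical size but consumes
# physical blocks only where data has actually been written --
# the same principle a thin-provisioned volume applies to LUNs.
path = "thin_volume.img"
with open(path, "wb") as f:
    f.seek(1024**3 - 1)  # claim a 1 GiB logical size
    f.write(b"\0")       # write a single byte at the end

st = os.stat(path)
logical = st.st_size           # capacity the "host" believes it has
physical = st.st_blocks * 512  # blocks the "array" actually spent

print(f"logical size:   {logical} bytes")
print(f"physical usage: {physical} bytes")
os.remove(path)
```

On most filesystems the physical usage comes out to a single allocated block, orders of magnitude below the advertised capacity — which is exactly the utilization gap thin provisioning exploits.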
The most recent row is over HP's concept of Dynamic Capacity Management (DCM), which the company claims is its own flavor of thin provisioning. Basically, HP is using the Virtual Disk Service (VDS) in Windows Server 2008 as a volume shrink manager to boost capacity in the mid-range StorageWorks platform. Some argue, however, that this approach doesn't really fly because the operating system is merely shrinking and expanding volumes based on application demands, rather than allocating physical capacity on demand as data is written.
HP isn't backing down. The company just released the StorageWorks Enterprise Virtual Array featuring DCM, which it says can improve hard drive utilization and boost power efficiency by 45 percent.
All this noise comes barely a month after a spat between DataCore and VMware over who invented thin provisioning in the first place. DataCore shipped its Dynamic Virtual Capacity system in 2001, although VMware had shipped VMware Workstation several years before that. But 3Par says it delivered the first "true" thin provisioning platform in 2003 ... You get the idea.
Since the goal of thin provisioning is to improve efficiency and lower costs, it probably doesn't make much difference to the workaday enterprise manager how the network drums up additional disk space. But there are a number of pitfalls to be aware of, regardless of which approach you take. Byte & Switch offers several helpful tips on getting it right by making sure that the system you invest in has the chops to cut it in your environment.