Thin provisioning is one of the most effective ways to get a handle on storage over-provisioning. Rather than reserving a volume's full capacity up front, a thin-provisioned system allocates physical blocks only as applications actually write data, streamlining utilization so you don't end up buying and maintaining more storage than you actually need.
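The allocate-on-write idea can be sketched in a few lines. This is a toy model, not any vendor's actual implementation: the `ThinPool` and `ThinVolume` classes are hypothetical names, and real arrays track extents and metadata far more elaborately. The point is simply that a thin volume advertises a large logical size while drawing physical blocks from a shared pool only on first write.

```python
class ThinPool:
    """Shared pool of physical blocks backing many thin volumes (illustrative only)."""
    def __init__(self, physical_blocks):
        self.free = physical_blocks

    def allocate(self, n):
        if n > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= n


class ThinVolume:
    """Advertises a large logical size; consumes pool space only on write."""
    def __init__(self, pool, logical_blocks):
        self.pool = pool
        self.logical_blocks = logical_blocks
        self.mapped = set()  # logical blocks already backed by physical storage

    def write(self, block):
        if block >= self.logical_blocks:
            raise IndexError("write past end of volume")
        if block not in self.mapped:  # first write to this block: allocate on demand
            self.pool.allocate(1)
            self.mapped.add(block)

    def physical_used(self):
        return len(self.mapped)


pool = ThinPool(physical_blocks=1000)
vol = ThinVolume(pool, logical_blocks=10_000)  # 10x over-subscribed logical size
for b in range(250):
    vol.write(b)
print(vol.physical_used(), pool.free)  # 250 blocks consumed, 750 still free
```

Note that the volume claims ten times the pool's physical capacity, yet only the blocks actually written consume real space. That gap is exactly where both the savings and the risks of thin provisioning live.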
But as the technology heads into the enterprise mainstream, users are starting to encounter the downside of thin provisioning. Depending on how it's used and what sort of data environment is in place, thin provisioning can actually cause more problems than it solves.
For shops with particularly heavy data requirements, a word of warning appeared on Byte and Switch a few days ago. Most systems let managers cap the amount of storage a given application may use, but a virtual storage pool that fills too quickly or unexpectedly can be exhausted before the manager can provision more physical storage behind it. The resulting disk and application errors can slow things to a crawl.
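That failure mode comes down to arithmetic: the logical capacity committed to volumes can far exceed the pool's physical capacity, so an unexpected burst of writes hits the wall before anyone reacts. A minimal sketch, with made-up numbers and a hypothetical low-water-mark alert (the usual mitigation), shows how a 2x over-subscribed pool fails mid-burst:

```python
# Illustrative numbers only -- not from any real deployment.
POOL_PHYSICAL_GB = 1000
ALERT_THRESHOLD = 0.80  # warn when 80% of physical capacity is consumed

volumes_logical_gb = [500, 500, 500, 500]  # 2000 GB promised vs 1000 GB real
oversubscription = sum(volumes_logical_gb) / POOL_PHYSICAL_GB
print(f"over-subscription ratio: {oversubscription:.1f}x")

used_gb = 0
for burst in [300, 300, 300, 300]:  # unexpected growth spurts
    if used_gb + burst > POOL_PHYSICAL_GB:
        # Writes fail here even though volumes still show free logical space.
        print(f"write FAILED at {used_gb} GB used: pool exhausted")
        break
    used_gb += burst
    if used_gb / POOL_PHYSICAL_GB >= ALERT_THRESHOLD:
        print(f"ALERT: pool {used_gb / POOL_PHYSICAL_GB:.0%} full -- add physical storage")
```

The alert fires at 90% utilization, but the very next burst fails anyway; if growth outpaces the procurement cycle, the warning threshold buys little. That is the gap the Byte and Switch warning points at.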
The added complexity required to maintain thin environments can itself begin to hamper operations. Managing a thin-provisioned storage network demands a fair amount of built-in intelligence and monitoring, and if a failure does occur, recovery will likely take longer than it would on a traditionally provisioned system.
It also pays to realize that not all thin provisioning systems are created equal. Varying features and functionality can either help or hinder operations depending on the surrounding environment. 3PAR's Geoff Hough offers a good rundown here of the key design choices: large vs. small allocation units, reserved vs. reservationless implementations, and manual vs. autonomic configuration.
There's no question that thin provisioning is an extremely effective storage management tool, particularly when virtualization is placing an ever greater burden on available resources. But like any revolutionary technology, it has a few quirks under the hood.