Better Storage for a Virtual World

Arthur Cole

Arthur Cole spoke with Augie Gonzalez, director of product marketing, DataCore.

Many organizations are comfortable virtualizing infrastructure for backend or business process applications, but balk when it comes to top-tier transaction or mission-critical ones. Part of the reason is that virtual infrastructure has a habit of overwhelming I/O resources, putting crucial, time-sensitive data in limbo. While this may seem like a networking issue, it is in fact a storage problem as well, according to DataCore's Augie Gonzalez. Through intelligent software, advanced caching, Flash memory and improved SAN technology, however, the storage side of the house is quickly catching up to servers and networking.

Cole: Enterprises are still hesitant to trust their virtual infrastructure with mission-critical applications. What needs to happen to overcome this fear?

Gonzalez: Education and proof points. Enterprises need to see first-hand examples of colleagues who confronted the roadblocks to virtualizing their Tier 1 apps and overcame them without spending exorbitant sums on the solution. Traditional thinking holds that virtualization will bottleneck and slow down critical business applications, yet the productivity and cost-saving advantages are driving the move to virtualization anyway. That move is inevitable, and intelligent virtualization software can overcome the performance dilemma by harnessing the latest CPU and memory technologies to dramatically increase performance.

IT has to be more forthcoming and public about its successes. Many are apprehensive about doing so, fearing it will disclose trade secrets to competitors. We at DataCore have been fortunate in that many of our customers openly discuss their approach to supporting mission-critical applications in virtual environments. We need to get the word out on the enterprises that are successfully virtualizing their Tier 1 applications. I say, share a little — learn a lot.

Cole: Storage is often mentioned as a major limiting factor when extending virtual infrastructure. Is it simply a matter of provisioning more storage, or do we need to rethink some of our assumptions as to how storage should function in virtual environments?

Gonzalez: If it were as simple as provisioning a little more storage, we wouldn’t need to have this conversation.

The root causes revolve around contention and collisions — collisions over shared storage resources and contention between unlike access patterns on shared channels. In addition, we are still thinking about storage as if it were only disk drives, whereas from an application standpoint it is all about how fast data and transactions can be processed. Managing and optimizing storage resources in a virtual world has to deal not only with disk drives, but with caches, fast Flash memory, virtual and physical storage networking, and access to data stored remotely or in the cloud. These, after all, were the reasons we kept business-critical apps isolated in the first place.

When you dissect these apps, you discover a spectrum of complex I/O behaviors — some short and bursty, others lengthy and random, and everything in between. The art of tuning each data flow has long fueled the craft of database consultancies. Virtualization alone doesn’t change this; consolidation of virtualized apps does. When everyone can’t have a private lane, smashups become very frequent. And when a common resource suffers an outage, everyone stalls.

The attention now shifts to dynamically sensing I/O traffic across the virtualized infrastructure and adjusting to it — rapidly making and breaking connections between workloads and purpose-built hardware without manual intervention, caching to avoid using the lanes unnecessarily, and redirecting traffic away from out-of-service components to units that can take their place. Storage virtualization software is aimed squarely at those challenges.
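
To make that concrete, here is a minimal, purely illustrative Python sketch of the kind of sense-and-adjust loop described above. It is not DataCore's implementation; the tier names, latency threshold and classes are hypothetical. It simply watches per-volume response times, promotes a struggling workload to a faster tier, and moves volumes off an out-of-service device.

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class Tier:
    name: str              # e.g. "flash" or "sas" -- assumed tier labels
    healthy: bool = True   # False once the device is out of service


@dataclass
class Volume:
    name: str
    tier: Tier
    latencies_ms: list = field(default_factory=list)  # recently observed I/O latencies

    def observed_latency(self) -> float:
        return mean(self.latencies_ms) if self.latencies_ms else 0.0


def rebalance(volumes, tiers, slow_threshold_ms=5.0):
    """One simplified sensing/adjusting pass: promote slow volumes,
    move volumes off failed tiers. Threshold and policy are made up."""
    fast = next(t for t in tiers if t.name == "flash" and t.healthy)
    fallback = next(t for t in tiers if t.healthy)   # simplest choice: first healthy tier
    for vol in volumes:
        if not vol.tier.healthy:
            vol.tier = fallback    # redirect away from an out-of-service component
        elif vol.observed_latency() > slow_threshold_ms and vol.tier is not fast:
            vol.tier = fast        # promote a workload whose response time is suffering


# Example: a bursty OLTP volume gets promoted; a volume on a failed array is moved off it.
flash, sas, failed = Tier("flash"), Tier("sas"), Tier("old-array", healthy=False)
oltp = Volume("oltp-db", sas, [8.2, 9.1, 7.7])
reports = Volume("reports", sas, [2.0, 1.8])
archive = Volume("archive", failed, [3.0])

rebalance([oltp, reports, archive], [flash, sas])
print(oltp.tier.name, reports.tier.name, archive.tier.name)  # -> flash sas flash
```

A real product would of course make these decisions continuously and at much finer granularity; the sketch only shows the shape of the feedback loop.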

Cole: What about networking? Shouldn't we be looking at the entirety of data infrastructure when planning for top-tier functionality?

Gonzalez: As users have to deal with the new dynamics and faster pace of today’s business, they can no longer be trapped within yesterday’s more rigid and hardwired architecture models. Infrastructure is constructed on three pillars — computing, networking and storage — and in each area hardware decisions will take a back seat to a world dictated by software and driven by applications.

Networking, though key, is relatively well understood and the lessons we learned from distributed computing continue to apply. The stumbling block is not in getting users connected to their apps on the front end, but getting robust, predictable response from the apps to pools of tiered storage devices on the back end.

A portion of this communication is storage networking, since much of it travels over iSCSI or Fibre Channel SANs. Some of it travels over virtual channels and virtual switches, to be sure, yet those behave much as you would expect. Only when you step outside the server or the storage device are you in a position to control, prioritize and flex the storage network infrastructure to meet the service levels of competing virtualized Tier 1 apps.
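
As a rough illustration of prioritizing competing traffic on that shared back end, the sketch below (hypothetical classes and priority values, not a real SAN QoS API) dispatches pending I/O highest-priority first, so a Tier 1 transactional app is served before lower-priority workloads sharing the same paths.

```python
import heapq
import itertools


class StorageQoSQueue:
    """Dispatch pending I/O highest-priority first, FIFO within a priority level."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()   # tie-breaker that preserves arrival order

    def submit(self, app: str, priority: int, request: str):
        # Lower number = higher priority; 0 might be reserved for Tier 1 apps.
        heapq.heappush(self._heap, (priority, next(self._order), app, request))

    def dispatch(self):
        priority, _, app, request = heapq.heappop(self._heap)
        return app, request

    def __len__(self):
        return len(self._heap)


q = StorageQoSQueue()
q.submit("nightly-backup", priority=2, request="read 64 MB")
q.submit("erp-db", priority=0, request="write 8 KB")      # Tier 1 transaction
q.submit("analytics", priority=1, request="read 1 MB")

while q:
    print(q.dispatch())   # erp-db first, then analytics, then nightly-backup
```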
