Today, about 75 percent of all data center workloads are virtualized, and that number is only expected to grow. The biggest challenge IT admins face is that conventional storage is ill-equipped to deal with virtualization because it was built for physical workloads.
Problems arise because legacy storage organizes data into logical unit numbers (LUNs) and volumes that might house tens or hundreds of individual virtual machines (VMs), forcing resident VMs to fight over limited resources — a phenomenon called the "noisy neighbor." One common response is to throw more high-performance flash storage at the problem, but this alone cannot fix it; it simply postpones dealing with the underlying issue (LUNs). Costs can spiral out of control, as an all-flash architecture still dedicated to LUNs and volumes does not necessarily overcome the pain points of managing virtual workloads.
While many companies aspire to build cloud-scale infrastructures with agility and automation for diverse virtualized workloads, they have been forced to choose between limited scale-up that requires a large number of disks and expensive, inefficient scale-out. According to Chuck Dubuque, senior director of product and solution marketing for Tintri, five areas are critical to successful data center modernization: speed, quality of service (QoS), disaster recovery, predictive data analytics, and manageability at scale.
Modernizing the Data Center
Speed

When it comes to running a data center, the last thing anyone wants is to be bogged down by IO performance and latency concerns — even when using flash storage.
With conventional storage, IO requests are handled sequentially. So, a mission-critical test for the development team gets stuck behind a massive (and relatively unimportant) database update. And it’s why boot storms and antivirus scans can cripple VDI user experience.
Within a LUN, if a single VM acts like a noisy neighbor and demands more than its share of performance, it can negatively affect the performance of other VMs in that same LUN. Fortunately, there’s an alternative — more organizations are turning to VM-aware storage (VAS), which uses individual VMs as the unit of management.
With VM-aware storage, IT can give every VM its own performance lane. There are no LUNs, so there are no neighbors. If an individual VM misbehaves, it doesn't affect any other VMs on the VAS platform. Rather than stacking up actions sequentially, VM-aware storage handles them in parallel, ending the performance hiccups that can be so pervasive. Without the limitations of traditional, physical-first storage, virtualized applications perform (on average) six times faster.
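The contrast between a shared LUN queue and per-VM lanes can be sketched in a few lines. This is a hypothetical toy simulation (the function names, VM names, and round-robin policy are illustrative assumptions, not how any vendor's scheduler actually works), but it shows why a burst from one VM starves neighbors in a shared queue while per-VM lanes keep a critical request moving:

```python
from collections import deque

# Toy model: contrast LUN-style FIFO servicing with per-VM lanes.
# All names and workloads here are illustrative assumptions.

def fifo_completion_order(requests):
    """LUN model: every VM shares one queue, so a burst from one VM
    delays everything queued behind it."""
    return [vm for vm, _ in requests]

def round_robin_completion_order(requests):
    """VM-aware model: each VM gets its own lane; service one request
    per VM per pass so no single VM monopolizes the device."""
    lanes = {}
    for vm, io in requests:
        lanes.setdefault(vm, deque()).append(io)
    order = []
    while any(lanes.values()):
        for vm, lane in lanes.items():
            if lane:
                lane.popleft()
                order.append(vm)
    return order

# A "noisy" VM submits 4 IOs before a critical VM submits 1.
workload = [("noisy", i) for i in range(4)] + [("critical", 0)]
print(fifo_completion_order(workload))         # critical waits behind all 4
print(round_robin_completion_order(workload))  # critical served on the first pass
```

In the FIFO model the critical request completes last; with per-VM lanes it completes on the scheduler's first pass, regardless of how much the noisy VM has queued.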
Quality of Service (QoS)
There is plenty of talk about QoS, but it is rarely clearly defined. In the past, storage systems set minimum and maximum IOPS at the volume level, which means the dozens of very different VMs inside a volume all get the same level of QoS.
VM-level QoS allows IT managers to set specific parameters for each VM — simply toggle the minimum and maximum IOPS as desired, to impose a ceiling on a rogue VM or ensure resources for a mission-critical VM (e.g., a finance server at end of month). VM-level QoS can also be used to create multiple tiers of service on a single platform. In the past, enterprises and service providers typically bought multiple storage devices, with some dedicated to “gold” applications, others to “silver” applications and so on. The array itself was the dividing line. With VM-level QoS, you can establish gold, silver, bronze and/or other tiers on one storage device, and then assign each VM to a tier.
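The tiering idea described above reduces to a small lookup-and-clamp. The sketch below is hypothetical — the tier names, IOPS bands, and VM names are illustrative assumptions, not any vendor's defaults — but it captures the mechanics: cap a rogue VM at its tier's ceiling and guarantee the floor for a mission-critical one:

```python
# Hypothetical VM-level QoS tiers on a single storage device.
# Tier names and IOPS values are illustrative assumptions.

TIERS = {
    "gold":   {"min_iops": 5000, "max_iops": 20000},
    "silver": {"min_iops": 1000, "max_iops": 5000},
    "bronze": {"min_iops": 0,    "max_iops": 1000},
}

# Each VM is assigned to a tier rather than to a dedicated array.
assignments = {"finance-db": "gold", "dev-test": "bronze"}

def granted_iops(vm, demanded):
    """Clamp a VM's demanded IOPS into its tier's [min, max] band:
    the max imposes a ceiling on a rogue VM, the min reserves
    resources for a mission-critical one."""
    tier = TIERS[assignments[vm]]
    return max(tier["min_iops"], min(demanded, tier["max_iops"]))

print(granted_iops("dev-test", 8000))   # rogue dev VM capped at bronze ceiling: 1000
print(granted_iops("finance-db", 200))  # month-end finance server held at gold floor: 5000
```

Because the tier is just a per-VM attribute, moving a VM from silver to gold is a metadata change rather than a data migration between arrays.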
Disaster Recovery

IT pros need to establish a good disaster recovery (DR) plan that includes per-application/VM replication along with simplicity and automation. A plan should categorize applications and VMs according to their business criticality. In the event of a disaster, mission-critical applications need to be up and running in a very short timeframe. This requires that the recovery point objective (RPO) and recovery time objective (RTO) of such applications be defined in a granular fashion to meet specific SLAs. A high-performance per-application/VM replication capability can be hugely beneficial in getting critical applications up and running in minutes.
Additionally, the DR plan should account for the ability to automate workflows such as site failover, failback, and planned migrations. Given that the vast majority of applications in an enterprise environment today are virtualized, the plan should recommend solutions that natively integrate with recovery tools for virtualized servers (such as VMware Site Recovery Manager). Such capability helps infrastructure managers set up and execute DR plans with small or negligible recovery windows.
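The criticality-tier idea can be expressed as a simple data structure. This sketch is hypothetical — the tier names, RPO/RTO minute values, and the "replicate twice per RPO window" rule are illustrative assumptions, not a prescribed methodology — but it shows how granular per-tier objectives translate directly into replication schedules:

```python
from dataclasses import dataclass

# Hypothetical DR categorization: tier names and minute values
# are illustrative assumptions, not SLA recommendations.

@dataclass
class DrTier:
    rpo_minutes: int  # max tolerable data loss
    rto_minutes: int  # max tolerable downtime

DR_TIERS = {
    "mission-critical": DrTier(rpo_minutes=15,   rto_minutes=15),
    "business":         DrTier(rpo_minutes=60,   rto_minutes=240),
    "best-effort":      DrTier(rpo_minutes=1440, rto_minutes=1440),
}

def replication_interval(tier_name):
    """Replicate at least twice per RPO window, so one missed cycle
    still leaves a replica inside the objective."""
    return DR_TIERS[tier_name].rpo_minutes // 2

print(replication_interval("mission-critical"))  # replicate every 7 minutes
print(replication_interval("business"))          # replicate every 30 minutes
```

Mapping each VM to one of these tiers at provisioning time means the replication schedule and failover priority are decided before a disaster, not during one.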
Predictive Data Analytics
IT organizations and service providers typically can’t predict what virtualized workloads they might add or modify next. They can only guess at performance or capacity requirements and often buy more capacity than they really need.
Data analytics is crucial as it provides IT pros with information to make better decisions about application behaviors and storage needs. Predictive analytics in particular make it possible for data center professionals to trend their use of capacity and performance, and anticipate future needs. Advanced tools will also allow the user to model scenarios, so they can precisely assess the impact of changes to their virtual footprint.
Importantly, this type of analysis doesn’t take minutes or hours — modern technologies like Elasticsearch make it possible to crunch thousands or even millions of data points in seconds. The bottom line is that predictive analytics replaces guesswork with visibility and precision.
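The trending idea described above boils down to fitting a line to historical samples and projecting forward. The sketch below is only the least-squares core under stated assumptions — the sample data, pool size, and linear model are illustrative; a real analytics service would use far richer models and thousands of signals:

```python
# Hypothetical capacity-trending sketch: fit a line to daily usage
# samples and project when the pool fills. Data is illustrative.

def fit_line(samples):
    """Ordinary least squares over (day_index, used_tb) points."""
    n = len(samples)
    xs = range(n)
    mx, my = sum(xs) / n, sum(samples) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, samples)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # (slope, intercept)

def days_until_full(samples, pool_tb):
    """Project the day index at which usage crosses pool capacity."""
    slope, intercept = fit_line(samples)
    if slope <= 0:
        return None  # usage flat or shrinking; no projected fill date
    return (pool_tb - intercept) / slope

usage = [40.0, 41.0, 42.0, 43.0, 44.0]      # TB used on days 0..4
print(days_until_full(usage, pool_tb=100.0))  # 60.0 days at ~1 TB/day growth
```

Even this naive projection replaces a gut-feel purchase decision with a number that can be re-derived daily as new samples arrive.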
Manageability at Scale

As their virtual infrastructure grows to tens or even hundreds of thousands of VMs, IT admins need to simplify storage management and avoid constant manual configuration of LUNs and volumes.
Managing storage at the VM level enables IT to continuously automate and optimize VM placement across the storage pool, taking into account space savings, required resources, and the cost in time and data of moving VMs. When a VM is moved, its associated snapshots, statistics, protection, and QoS policies should migrate with it, using a compressed and deduplicated replication protocol.
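A minimal version of such placement logic can be sketched as a greedy chooser. This is a hypothetical illustration — the array names, sizes, and most-free-space policy are assumptions; a real policy engine would also weigh IOPS headroom, deduplication affinity, and the cost of moving existing data:

```python
# Hypothetical greedy placement sketch: pick the pool member with
# the most free space that can hold the VM. Names and sizes are
# illustrative assumptions.

def place_vm(vm_size_gb, arrays):
    """arrays maps array name -> free GB. Returns the chosen array
    name (reserving the space), or None if nothing fits."""
    candidates = {name: free for name, free in arrays.items()
                  if free >= vm_size_gb}
    if not candidates:
        return None
    choice = max(candidates, key=candidates.get)
    arrays[choice] -= vm_size_gb  # reserve the space
    return choice

pool = {"array-a": 500, "array-b": 1200, "array-c": 300}
print(place_vm(400, pool))   # array-b has the most headroom
print(pool["array-b"])       # 800 GB free after the reservation
```

The point of automating even this simple decision is that it happens per VM, thousands of times, without an admin ever carving a LUN by hand.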
The automation and storage intelligence of policy-based VM management, combined with advanced analytics and QoS, often lets enterprises and service providers triple or quadruple their virtualized infrastructure without a corresponding increase in dedicated storage personnel.