Troubleshooting Virtualization and the Cloud

Arthur Cole

Arthur Cole spoke with Steve Garrison, vice president of marketing for Infoblox, about the requirements for a new generation of automation tools for the cloud.

 

The promise of the cloud is not that it simply provides a convenient means to scale up resources, but that it creates the framework for a highly dynamic infrastructure capable of supporting even the most complex data environments. However, that won't be possible without a new generation of automation technology capable of handling multiple tiers of compute, storage, networking, applications and data. Infoblox's Steve Garrison spells out the requirements and how they are being addressed today.

 

"The pace of virtualization has stalled, and despite all the hype, achieving a real cloud implementation is still a pipe dream for many."


Steve Garrison
VP of Marketing
Infoblox

Cole: Infoblox has argued that the true utility-style advantages of cloud computing won't be realized until more advanced automation tools become available. What, exactly, are we still lacking?

Garrison: The pace of virtualization has stalled, and despite all the hype, achieving a real cloud implementation is still a pipe dream for many. Applications that have complex networking, storage or software, or that carry critical functions, are the last to be virtualized. What causes this stall, and the limited benefits that follow, are the many network infrastructure operational and troubleshooting procedures that need to be as refined in the virtual environment as they are in the physical one. For example, restructured virtual storage must be configured and validated. Redundancy and security strategies must be re-evaluated and adapted to a virtual infrastructure in which virtual machines are free to move from one physical server to another.

 

Unlike the simple servers and applications that have already been virtualized, the remaining systems may have tiered storage across SANs, multiple IP addresses spanning more than one VLAN, and security policies built into intrusion-protection systems or switch hardware access lists. There are many things that can go wrong, both when first migrating an application to virtual infrastructure and then while maintaining it as virtual machines move between physical servers. This complexity is creating nontrivial roadblocks to the cloud and its perceived benefits.



Cole: Is it safe to assume that Infoblox is on the verge of just such a platform?

Garrison: Infoblox has built its business by developing systems that support the mission-critical protocols underpinning any IP application. These systems simplify and improve the reliability of complex network infrastructure services such as domain name system (DNS) resolution, IP address assignment and IP address management (IPAM). They eliminate the manual tools and scripts that are too brittle, and require too much overhead, for today's highly dynamic networks, which must address growing end-user demands and support more and more IP-based devices. We are leveraging the expertise and technology we have developed over the last 12 years automating these key services to build more real-time automation capabilities and tools, so that the network infrastructure can easily conform to the demands of a next-generation hybrid of virtual, cloud and physical networks, and enterprises can reap all the rewards these new applications promise.
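To make the contrast with brittle scripts concrete, the sketch below shows the general pattern Garrison is describing: provisioning a new virtual machine by making calls to a central IPAM/DNS service rather than editing zone files or device configurations by hand. The endpoints, field names and credentials here are hypothetical, invented for illustration; they are not Infoblox's actual API.

```python
# Illustrative sketch only: the REST endpoints and field names below are
# hypothetical, not a real product API. The point is the pattern of replacing
# per-device manual scripts with a single call to a central IPAM/DNS service
# when a new virtual machine is provisioned.
import requests

IPAM_URL = "https://ipam.example.com/api/v1"   # hypothetical IPAM/DNS service
AUTH = ("svc-automation", "secret")            # placeholder credentials


def provision_vm_address(hostname: str, network_cidr: str) -> str:
    """Reserve the next free address in a network and register its DNS records."""
    # Ask the IPAM service for the next available address in the subnet.
    resp = requests.post(
        f"{IPAM_URL}/addresses/next-available",
        json={"network": network_cidr, "hostname": hostname},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    address = resp.json()["address"]

    # Register forward (A) and reverse (PTR) DNS records in the same step,
    # so name resolution stays consistent with the address assignment.
    resp = requests.post(
        f"{IPAM_URL}/dns/records",
        json={"hostname": hostname, "address": address, "types": ["A", "PTR"]},
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    return address


if __name__ == "__main__":
    ip = provision_vm_address("web-42.example.com", "10.20.30.0/24")
    print(f"Provisioned web-42.example.com at {ip}")
```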


Cole: How do we overcome the inherent differences between physical, virtual and cloud infrastructure when it comes to automation? Won't application data still have to jump through various hoops to traverse each environment, increasing latency?

Garrison: Implementing automated network management today, spanning both virtual and conventional network architecture, greatly simplifies the virtualization of complex systems. Virtualization increases the number of virtual network devices, such as those employed by hypervisors, storage endpoints, virtual switches and virtual network appliances, along with the numerous virtual machines on each physical host. As a result, a single physical server may support dozens or even scores of IP addresses on multiple subnets. Furthermore, the mapping between IP addresses and physical devices changes constantly due to vMotion and the creation and removal of virtual machines. Automation and centralized management may be the only way to remain confident that data center policies and procedures have been maintained.
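As a simple, self-contained illustration of the kind of check centralized automation makes possible, the sketch below compares a hypervisor cluster's current view of where each VM and IP address lives against the records held in an IPAM database and flags any drift. The sample data, host names and field names are invented for the example and do not describe any particular product.

```python
# Self-contained sketch with invented sample data: it compares a hypervisor
# inventory (which host each VM and IP currently lives on) against an IPAM
# database's view, and reports any drift caused by VM moves or stale records.

# What the hypervisor cluster reports right now (e.g. after several vMotion events).
hypervisor_inventory = {
    "10.20.30.11": {"vm": "web-01", "host": "esx-host-03"},
    "10.20.30.12": {"vm": "web-02", "host": "esx-host-01"},
    "10.20.30.20": {"vm": "db-01",  "host": "esx-host-02"},
}

# What the IPAM/DNS database still believes.
ipam_records = {
    "10.20.30.11": {"vm": "web-01", "host": "esx-host-01"},   # stale: VM has moved
    "10.20.30.12": {"vm": "web-02", "host": "esx-host-01"},
    "10.20.30.99": {"vm": "old-app", "host": "esx-host-02"},  # VM no longer exists
}


def find_drift(live: dict, recorded: dict) -> list[str]:
    """Return human-readable descriptions of every mismatch between the two views."""
    problems = []
    for ip, actual in live.items():
        expected = recorded.get(ip)
        if expected is None:
            problems.append(f"{ip} ({actual['vm']}) is live but missing from IPAM")
        elif expected != actual:
            problems.append(f"{ip} recorded on {expected['host']}, actually on {actual['host']}")
    for ip in recorded.keys() - live.keys():
        problems.append(f"{ip} ({recorded[ip]['vm']}) is in IPAM but no longer running")
    return problems


if __name__ == "__main__":
    for issue in find_drift(hypervisor_inventory, ipam_records):
        print("DRIFT:", issue)
```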


Troubleshooting complex virtualized data centers is also greatly improved with automated configuration management. The Yankee Group estimates that 90 percent of the time spent resolving data center problems goes to simply identifying the source of the problem. With the additional layers of complexity in virtual infrastructure, automated assistance in tracking down problems becomes critical.


There is still much value to gain from virtualization in today's data center. The advantages of faster time-to-value, availability, flexibility and reduced costs are still waiting to be realized by virtualizing the conventional servers that remain in place today. But to gain these advantages, complexity must be tamed by automated and centralized system management.


