    Should We Celebrate Reaching 70 Percent Server Utilization?

    Improving server utilization is like walking on a frozen pond during a spring thaw. The more comfortable the air temperature gets, the greater the danger of falling through.

    With utilization, the higher you go, the less headroom you have when the inevitable data spikes arrive. Sure, you could have cloud-based IaaS at the ready, but then you are simply leasing underutilized resources rather than buying them.
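    To put rough numbers on that trade-off, the headroom left at a given utilization level is easy to sketch. The figures below are illustrative only, not drawn from any of the reports discussed here.

    ```python
    # Illustrative headroom math: how much of a traffic spike capacity running
    # at a given steady-state utilization can absorb before it saturates.

    def max_spike_multiplier(utilization: float) -> float:
        """Largest load multiplier absorbable before saturation (utilization in 0-1)."""
        return 1.0 / utilization

    for u in (0.50, 0.70, 0.90, 0.99):
        headroom_pct = (max_spike_multiplier(u) - 1.0) * 100
        print(f"At {u:.0%} utilization, load can grow ~{headroom_pct:.0f}% before saturation")
    ```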

    This is why the tendency to view recent reports of underutilized servers with consternation is wrong-headed. The latest is the Anthesis Group's finding, reported by eWeek's Jeffrey Burt, that 30 percent of servers worldwide are “comatose,” representing about $30 billion in “wasted” IT infrastructure. This may cause non-IT people to wring their hands, but anyone with even a modicum of experience in data infrastructure will know that a 70 percent utilization rate is actually quite good. In fact, it is historically high, given that in the days before virtualization, a typical server could sit idle maybe 80 percent of the time.

    The obvious solution for unused electrical equipment is to pull the plug, but things are a little more nuanced in the data center, says TechRepublic's Michael Kassner. No one wants to be caught short of resources in this anywhere/anytime digital economy, so organizations will have to implement a range of software and hardware constructs to monitor and manage data and electrical loads so that servers and other components can be powered up and down as demand dictates. This will incorporate not only the latest Data Center Infrastructure Management (DCIM) solutions, but also systems that drill down to the microprocessor level, where, as it turns out, serial links between chips still show utilization rates as low as 30 percent.
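    As a rough sketch of the kind of policy such monitoring tools enforce, the logic boils down to watching utilization and powering hosts up or down against a threshold. The host names, thresholds and in-memory power switch below are hypothetical; real DCIM suites expose far richer interfaces.

    ```python
    # Minimal sketch of a utilization-driven power policy. The thresholds and the
    # "powered_on" flag are stand-ins for a real DCIM integration.

    from dataclasses import dataclass

    @dataclass
    class Server:
        name: str
        cpu_utilization: float  # 0.0-1.0, averaged over a sampling window
        powered_on: bool

    POWER_DOWN_BELOW = 0.05   # hypothetical "comatose" threshold
    POWER_UP_ABOVE = 0.80     # hypothetical fleet-pressure threshold

    def rebalance(fleet: list[Server]) -> None:
        active = [s for s in fleet if s.powered_on]
        avg_load = sum(s.cpu_utilization for s in active) / max(len(active), 1)

        for server in fleet:
            if server.powered_on and server.cpu_utilization < POWER_DOWN_BELOW and avg_load < POWER_UP_ABOVE:
                server.powered_on = False   # idle box: candidate for power-down
                print(f"Powering down idle host {server.name}")
            elif not server.powered_on and avg_load > POWER_UP_ABOVE:
                server.powered_on = True    # bring capacity back online
                print(f"Powering up {server.name} to absorb load")

    rebalance([
        Server("web-01", 0.72, True),
        Server("web-02", 0.02, True),    # effectively comatose
        Server("spare-01", 0.00, False),
    ])
    ```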

    Some hardware vendors see utilization as an opportunity to move high-end systems at a time when commodity, white-box infrastructure seems to be all the rage. IBM recently introduced the Power System E850 with a guarantee that it can push utilization up to 70 percent without compromising performance. The four-socket system packs up to 4TB of memory and even comes with a unique pricing model that charges only for the cores that are powered up within the box. That gives enterprises a double incentive: use only what they need, and scale consumption dynamically in response to demand.
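    The economics of that pay-for-active-cores model are easy to sketch. The core count and per-core rate below are purely hypothetical and are not IBM's actual pricing; the point is simply that activating cores to follow demand costs less than keeping the whole box lit year-round.

    ```python
    # Hypothetical illustration of core-based pricing: pay only for activated
    # cores each month rather than for the full capacity of the box.

    TOTAL_CORES = 48            # cores physically installed (assumed figure)
    PER_CORE_MONTHLY = 200.0    # hypothetical cost per activated core, per month

    def yearly_cost(active_cores_by_month: list[int]) -> float:
        return sum(min(cores, TOTAL_CORES) * PER_CORE_MONTHLY for cores in active_cores_by_month)

    demand_profile = [16, 16, 24, 32, 48, 32, 24, 16, 16, 24, 32, 48]  # made-up demand
    print("Activate cores to follow demand:", yearly_cost(demand_profile))
    print("Keep every core powered all year:", yearly_cost([TOTAL_CORES] * 12))
    ```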

    Part of the data center's problem with utilization is that simple virtualization has largely run its course; it is producing the 70 percent rates being reported now but is unlikely to deliver much more. Fortunately, the new breed of virtualization, containers, has the potential to reboot the entire process, says Cloud Technology Partners architect Mike Kavis. Containers are vastly less cumbersome than full virtual machines and consume less compute and memory, so they can distribute workloads across available resources more efficiently and perhaps push utilization rates considerably higher.
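    One way to picture why lighter-weight, more granular workloads lift utilization is as a packing problem: smaller pieces fill servers more completely. The first-fit-decreasing sketch below uses made-up workload sizes expressed as a percentage of one server's capacity.

    ```python
    # First-fit-decreasing bin packing as a toy model of scheduling container
    # workloads onto servers. Sizes are hypothetical, in percent of one server.

    def pack(workloads: list[int], capacity: int) -> list[list[int]]:
        """Place each workload on the first server with room, largest first."""
        servers: list[list[int]] = []
        for w in sorted(workloads, reverse=True):
            for server in servers:
                if sum(server) + w <= capacity:
                    server.append(w)
                    break
            else:
                servers.append([w])  # no room anywhere: bring up another server
        return servers

    containers = [30, 25, 20, 15, 15, 10, 10, 5]  # fine-grained container workloads
    for i, server in enumerate(pack(containers, capacity=100), start=1):
        print(f"server {i}: {sum(server)}% utilized -> {server}")
    ```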

    For those who say that 70 percent utilization is pretty good, but we should not rest until we achieve 100 percent across the board, well, I refer to my earlier comment about thin ice. Over-provisioning still serves a purpose, which becomes abundantly clear to users only when service is lost. And the fact remains that both the hardware and software needed to ramp up servers from a powerless state to fully active in the blink of an eye are simply not available yet, particularly for emerging scale-out infrastructure responsible for real-time load balancing.
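    The thin-ice point can also be made with a standard queueing-theory back-of-the-envelope, here an M/M/1 approximation with an assumed 10 ms service time: as utilization creeps toward 100 percent, response times blow up, which is exactly why a measure of over-provisioning still pays for itself.

    ```python
    # Standard M/M/1 result: mean response time scales with 1 / (1 - utilization),
    # so it explodes as utilization approaches 100 percent. Service time is assumed.

    SERVICE_TIME_MS = 10.0  # assumed average time to handle one request

    def mean_response_time_ms(utilization: float) -> float:
        return SERVICE_TIME_MS / (1.0 - utilization)

    for u in (0.50, 0.70, 0.90, 0.99):
        print(f"{u:.0%} utilization -> ~{mean_response_time_ms(u):.0f} ms average response")
    ```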

    Improving server utilization will continue to be a work in progress for some time. But given the situation as little as five years ago, I would say that today’s 70 percent ratio is a glass that is more than half full.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.
