Everyone likes to get new stuff. Heck, that’s what Christmas is all about, and why it has emerged as a primary driver of the world economy.
In the data center, new stuff comes in the form of hardware and/or software, which lately have formed the underpinnings of entirely new data architectures. But while capital spending decisions almost always focus on improving performance, reducing costs or both, how successful has the IT industry been in achieving these goals over the years?
According to infrastructure consulting firm Bigstep, the answer is not very. The group recently released an admittedly controversial study that claims most organizations would see a 60 percent performance boost by running their data centers on bare metal infrastructure. Using common benchmarks like Linpack, SysBench and TPC-DC, the group contends that multiple layers of hardware and software actually hamper system performance and diminish the return on the investment that enterprises make in raw server, storage and network resources. Even such basic choices as the operating system and dual-core vs. single-core processing can affect performance by as much as 20 percent, and the problem is then compounded by advanced techniques like hyperthreading and shared memory access.
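For readers who want to gauge that gap on their own gear, a minimal sketch of the kind of comparison involved might look like the following. It assumes sysbench 1.0 or later is installed, the flags and the "events per second" output line can vary by version, and this is not Bigstep's own methodology.

```python
# Minimal sketch: run the same sysbench CPU workload on two environments
# (say, a bare-metal box and a VM on comparable hardware) and compare the
# reported throughput. Assumes sysbench 1.0+; output format may vary.
import re
import subprocess


def sysbench_cpu_events_per_sec(max_prime: int = 20000, seconds: int = 10) -> float:
    """Run sysbench's CPU benchmark locally and return events per second."""
    out = subprocess.run(
        ["sysbench", "cpu", f"--cpu-max-prime={max_prime}", f"--time={seconds}", "run"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"events per second:\s+([\d.]+)", out)
    if not match:
        raise RuntimeError("could not parse sysbench output")
    return float(match.group(1))


if __name__ == "__main__":
    print(f"events/sec: {sysbench_cpu_events_per_sec():,.1f}")
```

Run the script once on bare metal and once under a hypervisor on comparable hardware, and the difference in events per second gives a rough, CPU-only view of the kind of overhead the study describes.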
Even supposedly high-performance solutions like Flash memory can have a dark side when the surrounding infrastructure is over-engineered. Alibaba's chief technologist Wu Peng told the recent Flash Memory Summit that performance degradation is a significant problem and that highly complex architectures make it difficult for applications to assess the health of the underlying hardware. The company is currently trying to streamline its hardware and software builds, concentrating first on error correction at the RAID level and then looking at new programming models in the long term. Alibaba, by the way, is committed to an all-Flash architecture and is responsible for about 1 percent of worldwide Flash consumption.
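Error correction at the RAID level, mentioned only in passing above, ultimately comes down to parity arithmetic. As a generic illustration (this is textbook RAID-5-style XOR parity, not Alibaba's implementation), a lost stripe can be rebuilt from the surviving stripes plus the parity stripe:

```python
# Generic RAID-5-style parity sketch: XOR the data stripes to build a parity
# stripe, then reconstruct any single lost stripe from the survivors.

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)


stripes = [b"AAAA", b"BBBB", b"CCCC"]   # three data stripes
parity = xor_blocks(stripes)            # one parity stripe

# Lose one stripe, then recover it from the remaining stripes plus parity.
lost_index = 1
survivors = [s for i, s in enumerate(stripes) if i != lost_index]
recovered = xor_blocks(survivors + [parity])
assert recovered == stripes[lost_index]
```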
The real cause of over-architected data environments isn't the technology, however; it's the people in charge, says tech consultant Joe Stanganelli. Performance shortfalls are usually the product of poor management rather than a lack of resources. New administrators often inherit the mess left by old administrators and then resort to patchwork problem-solving rather than a reordering of the underlying infrastructure. C-level executives should keep in mind that the answer to poor resource management isn't necessarily new resources, but new management.
To hear some tell it, though, humans won't be of much use anyway once data center automation really kicks in. Former Sun Microsystems CEO Vinod Khosla told the recent Gigaom Structure conference in San Francisco that with data architectures on the verge of truly dynamic, software-driven configurability, human operators will be more of a hindrance than a help. The history of IT is certainly rife with tales of major system failures and data losses caused by human error.
Automation is not without fault either, of course, and without a human operator keeping an eye on things, an automated failure could prove even more devastating.
Clearly, though, the enterprise industry sees value beyond mere performance when it comes to virtual and cloud-based architectures, as well as Flash, multicore, SDN and all the other developments that have come along.
Resource flexibility, application- and user-centric functionality and distributed data operations all help to drive business models into the 21st century – even if they don't eke every last byte per second out of available silicon.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata, Carpathia and NetMagic.