Hyperscale: Not Just About Scale Anymore


    The enterprise IT industry has traditionally been segmented into four major groups: SMB/SOHO operations, mid-level organizations, the large enterprises of the Fortune 500, and the newest member: the hyperscale environments of Google, Facebook and other Web-facing entities.

    The historical pattern has been for technologies developed for the big boys to trickle down to the smaller fry, gradually enabling advanced capabilities to percolate throughout the entire industry. When it came to hyperscale, however, the thinking was that most of the supporting technology, which was primarily customized anyway, would not apply to the average data center because the levels of scale simply were not needed. Even if Ford Motor Company were to shed all its dealerships and sell cars exclusively online, it would not approach the volume of, say, Amazon.

    But that attitude appears to be changing as IT executives gain a greater understanding of what hyperscale means and how it can help transform legacy infrastructure into the modular, cloud-ready environments of the future.

    Indeed, a growing chorus of voices is arguing that hyperscale is the model on which all future IT development should be based. At a recent Gartner event, top analysts were adamant that as standards-based infrastructure built around open APIs leads to ever-increasing abstraction, the need for scale-out architectures will become paramount for the enterprise. Besides the ability to scale to meet daily workloads, these architectures will also improve reliability and continuity and will, in fact, produce a lower TCO than current infrastructure, perhaps by as much as 30 percent.

    Expect to see a push for hyperscale from leading platform vendors as well. HP’s John Gromala, senior director of hyperscale product management, recently laid out the case for key Project Moonshot systems, such as the ProLiant SL2500 and SL4500, in helping organizations of all sizes gear up for Big Data analytics and other data-intensive workloads. With up to 60 drives and more than 2PB of storage, the modules can be configured with a range of processors and other features to tailor them to specific use cases. In fact, the company is already looking beyond mere hyperscale to fully software-based infrastructure housed in integrated container modules capable of being deployed at a moment’s notice.

    Indeed, hyperscale provides a range of advantages even to organizations that do not require that level of scale. A key benefit is efficiency, according to leaders of Facebook’s Open Compute Project, with 100-fold improvements within the realm of possibility over the next few years. A key priority in Facebook’s infrastructure development was to enable hyperscale without blowing the power envelope or pushing maintenance and operational costs to unworkable levels. In that vein, the project’s blueprints call for not only liberal use of flash storage and other high-efficiency technology, but also optimized application delivery and streamlined software code. Virtually all of this can be applied to current data center infrastructure even if the need for scale is not front and center.

    One of the chief differences between hyperscale and traditional infrastructure, though, is the need to view the entire environment as a cohesive whole, says ServerLift’s David Zuckerman. That means, rather than boosting capacity with yet another pre-integrated system that may or may not be fully compatible with the pre-integrated systems already in place, hyperscale relies on greater openness and a high degree of systems and architectural management. This approach is generally more complex and takes longer to implement at first, but the longer-term benefits of greater efficiency and the ability to meet current and future data loads pay off in the end.

    It is a mistake, then, to think that hyperscale is just about scale. Rather, it encompasses nearly all of the changes taking place in today’s data environments, from virtualization to the cloud to low-power hardware to advanced analytics and application management.

    A hyperscale approach allows you to do more with less, but it also provides the flexibility to ramp up infrastructure when you need to do more with more.

    Arthur Cole
    With more than 20 years of experience in technology journalism, Arthur has written on the rise of everything from the first digital video editing platforms to virtualization, advanced cloud architectures and the Internet of Things. He is a regular contributor to IT Business Edge and Enterprise Networking Planet and provides blog posts and other web content to numerous company web sites in the high-tech and data communications industries.
