HPE Makes Case for Composable Infrastructure

    As enterprise IT continues to evolve, it has become clear that most IT organizations will end up supporting multiple types of workloads running on bare-metal servers, virtual machines and containers such as Docker.

    At its HPE Discover conference this week, Hewlett Packard Enterprise (HPE) added a programmable Synergy layer across the instance of OpenStack it provides on the HPE Helion CloudSystem 10 platform, in addition to making it simpler to compose workspaces on the HPE Hyper Converged appliance.

    Paul Miller, vice president of marketing for converged data infrastructure at HPE, says it is apparent that IT organizations will deploy both converged and hyperconverged infrastructure (HCI) systems. The former is used to scale out compute, storage and networking independently of one another, while the latter scales out IT infrastructure resources holistically using integrated appliances.

    A composable approach to managing IT, says Miller, will make it possible for IT operations teams to expose self-service portals through which application workloads can be automatically deployed on the type of IT infrastructure that makes the most economic sense. This is becoming a critical requirement, adds Miller, because IT organizations are now being asked to build, deploy and manage more applications than ever.
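    The composability Miller describes can be illustrated with a minimal sketch: a self-service layer that estimates the cost of running a workload on each available infrastructure type and "composes" it onto the most economical one. All names, rates and the cost model below are hypothetical, invented for illustration; this is not the HPE Synergy or OneView API.

    ```python
    # Hypothetical sketch of a composable-infrastructure placement decision.
    # Cost rates are illustrative, not real pricing.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        cpu_cores: int
        memory_gb: int
        storage_gb: int

    # Assumed monthly cost per unit for each infrastructure pool.
    COST_MODEL = {
        "bare_metal":      {"cpu": 12.0, "memory": 0.50, "storage": 0.05},
        "virtual_machine": {"cpu": 9.0,  "memory": 0.60, "storage": 0.08},
        "container":       {"cpu": 7.0,  "memory": 0.70, "storage": 0.10},
    }

    def monthly_cost(w: Workload, pool: str) -> float:
        """Estimate monthly cost of running workload w on the given pool."""
        rates = COST_MODEL[pool]
        return (w.cpu_cores * rates["cpu"]
                + w.memory_gb * rates["memory"]
                + w.storage_gb * rates["storage"])

    def compose(w: Workload) -> str:
        """Pick the infrastructure pool with the lowest estimated cost."""
        return min(COST_MODEL, key=lambda pool: monthly_cost(w, pool))

    db = Workload("analytics-db", cpu_cores=16, memory_gb=128, storage_gb=2000)
    print(compose(db), round(monthly_cost(db, compose(db)), 2))
    # → bare_metal 356.0
    ```

    A real composable platform would, of course, weigh far more than cost (capacity, policy, affinity), but the core idea is the same: the placement decision is made programmatically behind a self-service portal rather than by hand.
    
    
    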

    “IT organizations are going to need to figure out what the right mix is,” says Miller.

    Miller notes that both new HPE offerings are part of a continuum of data center advances being driven by a need to holistically address IT infrastructure resources at a time when advances in memory promise to make data center environments much denser than ever before.

    In fact, to illustrate that point, HPE this week also announced that, in combination with the software it provides, it has driven the cost of flash storage down to 3 cents per usable gigabyte per month on on-premises IT systems.

    At the other end of the spectrum, HPE this week demonstrated The Machine, which makes use of non-volatile memory running on a system-on-chip (SoC) architecture that HPE says will result in servers that are 8,000 times faster for most application workloads.

    The data centers of tomorrow will soon be unrecognizable. Instead of massive numbers of servers and storage systems interconnected by miles of network cables, data center environments will be much denser. That means data centers will require much less physical space and energy while delivering application performance improvements of several orders of magnitude. The challenge facing IT organizations will be finding a way to programmatically manage data center environments that may be physically smaller, but more complex than ever in terms of the number and types of application workloads running simultaneously.


    Mike Vizard
    Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.