    Hardware in a Software-Defined Universe

    The enterprise is rushing headlong into the era of abstract data infrastructure, with dreams of unbridled data federation and fungibility that allow just about anyone to compose a working environment ideally suited to their needs.

    But as we’ve seen all too often in the past, reality has a way of foiling the technological nirvana that drives the initial enthusiasm for a new development: not to the point where the idea is crushed into oblivion, but to where the final product falls short of what was first expected.

    To Quocirca’s Clive Longbottom, the endgame in this process is the establishment of Infrastructure as Code (IaC), in which automation handles most hardware provisioning and management, and the working environment of a given application or service is defined during its development. But even within the IaC concept, variants affect the way resources are deployed and provisioned. A declarative approach, for example, starts from a required application state and leaves it to the tooling to bring the infrastructure into line, while an imperative approach spells out the exact provisioning steps in scripts. An intelligent approach goes a step further, incorporating data from previous workloads and other factors to produce a more dynamic, continuous DevOps process. Depending on the application at hand, each of these approaches can help or hinder overall functionality.
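    To make the distinction concrete, consider the minimal Python sketch below, deliberately tied to no particular IaC product: the Cloud class and its methods are hypothetical stand-ins for a real provider API. The imperative function issues explicit commands in a fixed order, while the declarative version describes a desired end state and lets a reconciler work out the steps.

        # Hypothetical provider API; a stand-in for a real cloud SDK.
        class Cloud:
            def __init__(self):
                self.servers = {}

            def create_server(self, name, cpus):
                self.servers[name] = {"cpus": cpus}

            def resize_server(self, name, cpus):
                self.servers[name]["cpus"] = cpus

            def delete_server(self, name):
                del self.servers[name]

        # Imperative IaC: the script spells out each step explicitly.
        def provision_imperative(cloud):
            cloud.create_server("web-1", cpus=2)
            cloud.create_server("web-2", cpus=2)
            cloud.resize_server("web-1", cpus=4)

        # Declarative IaC: describe only the desired end state; a
        # reconciler computes and applies whatever changes are needed.
        DESIRED = {"web-1": {"cpus": 4}, "web-2": {"cpus": 2}}

        def reconcile(cloud, desired):
            for name, spec in desired.items():
                if name not in cloud.servers:
                    cloud.create_server(name, cpus=spec["cpus"])
                elif cloud.servers[name] != spec:
                    cloud.resize_server(name, cpus=spec["cpus"])
            for name in list(cloud.servers):
                if name not in desired:
                    cloud.delete_server(name)  # remove drifted resources

    Run repeatedly, the reconciler simply converges the environment on the declared state, which is why declarative tooling tends to cope better with drift; an imperative script, by contrast, assumes a known starting point.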

    It is also tempting to think of a world where the enterprise can simply ignore bare-metal provisioning, since all the real action is taking place at the virtual layer or above. But this would be a mistake, says tech consultant Keith Townsend, because even the most abstract architecture must engage with physical resources at some point, and failure to oversee that connection can result in wasted effort and cost overruns. This is more challenging than it sounds, considering that most organizations run multivendor hardware layouts, each with its own orchestration solution. Platforms like OpenStack and vRealize, along with configuration management tools like Puppet and Chef, can be combined into an “orchestrator of orchestrators,” but be prepared for some hefty coding to customize these platforms for your legacy environment.
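    The general shape of that pattern is easier to see in code. In the Python sketch below, the adapter classes are hypothetical placeholders; in practice each one would wrap the real OpenStack or vRealize APIs, and that wrapping is where the heavy custom coding lives.

        from abc import ABC, abstractmethod

        # Common interface that each vendor orchestrator is wrapped behind.
        class OrchestratorAdapter(ABC):
            @abstractmethod
            def deploy(self, workload: dict) -> None: ...

        # Hypothetical adapters; real ones would call vendor APIs and
        # carry most of the site-specific integration logic.
        class OpenStackAdapter(OrchestratorAdapter):
            def deploy(self, workload):
                print(f"OpenStack: deploying {workload['name']}")

        class VRealizeAdapter(OrchestratorAdapter):
            def deploy(self, workload):
                print(f"vRealize: deploying {workload['name']}")

        # The top-level orchestrator routes each workload to a backend.
        class MetaOrchestrator:
            def __init__(self):
                self.backends = {}

            def register(self, platform: str, adapter: OrchestratorAdapter):
                self.backends[platform] = adapter

            def deploy(self, workload: dict):
                self.backends[workload["platform"]].deploy(workload)

        meta = MetaOrchestrator()
        meta.register("openstack", OpenStackAdapter())
        meta.register("vrealize", VRealizeAdapter())
        meta.deploy({"name": "billing-api", "platform": "openstack"})

    The meta-orchestrator itself stays thin; the complexity concentrates in the adapters, which is exactly where the customization effort for a legacy environment tends to land.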

    A hyperconverged infrastructure (HCI) can also provide a great deal of flexibility when defining abstract data environments, says HPE’s Said Syed. For one thing, HCI reduces the burden of physical maintenance and management, to the point that specialized knowledge is no longer needed to swap out failed components. It is also more amenable to automation and dynamic resource configuration because changes can be implemented in minutes, usually with just a few mouse clicks. And the cost of deployment and operations is lower, as are the space requirements, allowing organizations to scale resources quickly and on budget. (Disclosure: I provide content services to HPE.)

    The enterprise also needs to consider the effect that increasingly distributed physical footprints will have on abstract resource provisioning and management. As Riverbed Technology’s Joe Bombagi noted recently, remote branch offices already hold roughly half of enterprise data, and that share is likely to grow as edge computing takes on IoT and Big Data workloads. To properly incorporate remote office/branch office (ROBO) infrastructure into an abstract data ecosystem, organizations should separate compute from storage, keeping compute on the edge and storage in a centralized facility, to produce a stateless edge that can more easily be managed from afar. At the same time, an optimized wide area network (WAN) goes a long way toward improving flexibility and reducing lag when compiling resources across regions.
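    As a rough illustration of that stateless-edge split, the Python sketch below assumes a hypothetical central ingest endpoint (storage.example.com): the edge node transforms data locally but persists nothing, shipping every record to central storage.

        import json
        import urllib.request

        CENTRAL_STORE = "https://storage.example.com/ingest"  # hypothetical endpoint

        def process_reading(raw: dict) -> dict:
            # Compute stays at the edge: filter and transform close to the source.
            return {"sensor": raw["id"], "celsius": round((raw["f"] - 32) * 5 / 9, 1)}

        def ship_to_center(record: dict) -> None:
            # Storage stays central: the edge node keeps no durable state,
            # so a failed box can be swapped out without data loss or migration.
            req = urllib.request.Request(
                CENTRAL_STORE,
                data=json.dumps(record).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)

        for raw in [{"id": "robo-42", "f": 98.6}]:
            ship_to_center(process_reading(raw))

    Because the edge box holds no durable state, replacing or rebuilding it is a low-risk remote operation, which is precisely what makes ROBO sites easier to manage from afar.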

    The abstract, federated data environment will evolve in one form or another over the next decade or so, but like earlier IT advancements, it will fail to meet all of the expectations that currently fuel its development. It is also fair to say that it will likely create new challenges for infrastructure and architectural management even as it solves many of the deficiencies that plague current data environments.

    But as data and services become the business at most enterprises, rather than the means to support ongoing business activity, organizations will need high degrees of flexibility and rapid turnaround of data-driven processes in order to remain competitive. And that is something that simply cannot happen in today’s static, silo-laden infrastructure.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
