    Recasting Infrastructure for AI Workloads

    Artificial intelligence (AI) is not only changing the way data is handled and business processes are carried out; it is also driving a broad reconfiguration of the underlying infrastructure.

    Clearly, the enterprise cannot support a cutting-edge tool like AI on yesterday’s hardware, but how exactly will infrastructure need to change in order to support the intelligent, dynamic data operations that are poised to remake business models the world over?

    According to database administrator Cheryl Adams, a key design consideration for AI-facing infrastructure is the need to support high I/O on large volumes of data. This can be a challenge because it requires high-speed read and write access at relatively low cost, which is why most cloud providers host their AI capabilities on commodity hardware. Support for multi-format storage infrastructure is also crucial, given that machine learning, cognitive computing and other forms of AI must pull both structured and unstructured data from multiple sources that rely on iSCSI, NFS, SMB and other protocols.
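
    To make that multi-format point concrete, here is a minimal Python sketch of pulling structured and unstructured data from two different shares. It assumes an NFS share holding CSV extracts and an SMB share holding image assets are already mounted at the hypothetical local paths shown; the paths and file patterns are illustrative, not from the article.

        # Minimal sketch of multi-format ingestion, assuming the NFS and SMB
        # shares are already mounted at the hypothetical paths below.
        from pathlib import Path

        import pandas as pd

        NFS_MOUNT = Path("/mnt/nfs/warehouse")   # hypothetical NFS mount point
        SMB_MOUNT = Path("/mnt/smb/media")       # hypothetical SMB/CIFS mount point

        def load_structured(mount: Path) -> pd.DataFrame:
            """Read every CSV extract on the share into one frame."""
            frames = [pd.read_csv(f) for f in mount.glob("*.csv")]
            if not frames:
                return pd.DataFrame()
            return pd.concat(frames, ignore_index=True)

        def list_unstructured(mount: Path) -> list[Path]:
            """Enumerate unstructured assets (here, images) for a training pipeline."""
            return sorted(mount.glob("**/*.jpg"))

        if __name__ == "__main__":
            table = load_structured(NFS_MOUNT)
            images = list_unstructured(SMB_MOUNT)
            print(f"{len(table)} structured rows, {len(images)} unstructured assets")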

    It is also important to recognize that AI is not a plug-and-play technology and that not all AI deployments are alike, says Mariya Yao, CTO of research firm TopBots. If you have only a few TB of data under management, you probably don’t need a full-blown Hadoop architecture. If your needs run to static analysis with little or no real-time prediction, a high-speed Spark solution may be overkill. In fact, many organizations are finding that even for an advanced application like deep learning, an ensemble solution that mixes legacy architectures and emerging statistical methods can outperform advanced neural networks in key situations.
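
    As a rough illustration of that ensemble idea, the sketch below blends a classic statistical model with a small neural network using scikit-learn’s soft-voting ensemble. The dataset and hyperparameters are purely illustrative, not anything Yao or the article specifies.

        # A hedged sketch of the ensemble idea: combine a legacy statistical
        # model with a small neural network and let a soft vote decide.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        # Synthetic stand-in for a real business dataset.
        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        ensemble = VotingClassifier(
            estimators=[
                ("logit", LogisticRegression(max_iter=1000)),   # classic statistical method
                ("mlp", MLPClassifier(hidden_layer_sizes=(32,),
                                      max_iter=500,
                                      random_state=0)),         # small neural network
            ],
            voting="soft",  # average the predicted class probabilities
        )
        ensemble.fit(X_train, y_train)
        print(f"held-out accuracy: {ensemble.score(X_test, y_test):.3f}")

    The point is not the specific models but the pattern: a blend of cheap, well-understood methods can be benchmarked against a deep network before committing to heavier infrastructure.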

    Naturally, however, an emerging AI environment will need targeted support at the processor level, which is why current vendors are working to overhaul their portfolios. Nvidia recently teamed up with a number of original design manufacturers (ODMs), including Foxconn and Quanta, to optimize hardware solutions around intelligent operations. The program centers on the Nvidia HGX architecture that currently supports the company’s DGX-1 supercomputer and AI-facing data centers at Microsoft and Facebook. Under the agreement, the ODMs will be able to create modular AI and hyperscale infrastructure at a faster pace, utilizing Nvidia technologies like the Tesla GPU and the NVLink interconnect.
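
    For developers, the practical entry point to hardware like this is usually a framework-level device check. The sketch below uses standard PyTorch calls to run a toy workload on an Nvidia GPU when one is present and to fall back to the CPU otherwise; everything beyond the device-selection calls is illustrative.

        # Minimal sketch of steering a workload onto Nvidia hardware when it
        # is present, falling back to CPU otherwise; the matrix multiply is a
        # toy stand-in for a real training step.
        import torch

        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        if device.type == "cuda":
            print(f"running on {torch.cuda.get_device_name(device)}")

        a = torch.randn(1024, 1024, device=device)
        b = torch.randn(1024, 1024, device=device)
        c = a @ b
        print(c.shape, c.device)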

    Traditional hardware platforms are also coalescing around AI, giving the enterprise a means to integrate the technology into legacy infrastructure. HPE’s Apollo 10 HPC server is positioned as an entry-level platform to help organizations get started on deep learning development and other AI applications. Once workloads ramp up, the 10 series can be supplemented by the sx40 and pc40 platforms, which utilize Xeon-based Gen10 servers and a variety of GPUs. The advantage of going with a vendor solution rather than commodity hardware is that you get integrated firmware for security, cluster management and additional services.

    Ultimately, AI environments will have to be highly customized for individual workloads and business models, and much of this will take place at the architectural level rather than the infrastructure level. But this does not mean AI can be deployed on just any old box.

    The new infrastructure will have to push speed, agility and scale to entirely new levels if the enterprise hopes to draw meaningful results from all that number-crunching. Before the first system is brought online, then, it will help to have already worked out what you want your AI infrastructure to do for you and what systems it will interact with to pull the data it needs.

    Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.
