Dell EMC Makes Case for Deploying AI Model On-Premises

Dell EMC is betting that many more artificial intelligence (AI) applications will wind up being built and deployed on-premises than most IT professionals expect. The company has unveiled a series of Dell EMC Ready Solutions for AI that use Intel processors to optimize machine learning algorithms and graphics processing units (GPUs) from NVIDIA to optimize deep learning algorithms, also known as neural networks.

Jon Siegal, vice president of product marketing for Dell Technologies, says Dell EMC is making available PowerEdge series servers configured with both types of processors and pre-configured with Isilon storage, a network-attached storage (NAS) system optimized for locally storing massive amounts of data on flash storage accessible to thousands of processors. That approach will enable IT organizations to deploy high-performance AI models atop a Big Data repository that can, for example, identify fraudulent transactions as they are being processed, says Siegal.

    “Speed matters when it comes to AI,” says Siegal.

Siegal also notes that a host of compliance and security requirements involving data locality will require some organizations to deploy AI models on-premises.

    To make it simpler to set up those environments, Dell EMC is also making available a consulting service that will configure Dell EMC Ready Solutions for AI. The Big Data repository that Dell EMC is making available as part of the service is based on a distribution of Hadoop from Cloudera. Dell EMC claims these systems can improve data scientist productivity by up to 30 percent and reduce time-to-operations by six to 12 months.

A new Data Science Provisioning Portal from Dell EMC reduces the steps needed to configure a data scientist’s workspace to just five clicks.

Siegal says the goal is to make it simpler for data scientists to create AI models via a self-service IT environment. Today, too many data scientists spend massive amounts of their time on IT plumbing rather than analyzing data and building AI models, notes Siegal.

It remains to be seen where AI models will wind up being built versus deployed. It may be more cost-effective, for example, to train an AI model using massive amounts of data inexpensively stored in a cloud. But when it comes time to deploy those AI models in a production environment, chances are that many of them will be running right next to the enterprise applications they are meant to enhance.

    Mike Vizard
    Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.