DarwinAI Applies AI to Make AI More Efficient

    DarwinAI today unveiled its Darwin Generative Synthesis platform, which promises to significantly improve the performance of artificial intelligence (AI) models developed using deep learning algorithms.

    Company CEO Sheldon Fernandez says the reason most AI models based on deep learning algorithms run slowly is that neural networks require an extended amount of trial and error to identify patterns. Darwin Generative Synthesis reverse engineers those patterns and algorithms after an AI model is created, enabling the model to accomplish its task much more efficiently, says Fernandez.

    That’s a critical requirement not only in terms of improving performance, but also in optimizing the usage of the expensive graphics processing units (GPUs) on which most neural networks run, adds Fernandez.

    Fernandez, whose company is fresh off raising $3 million in seed funding, says it has already become apparent that the processes associated with training AI models are so laborious that the resulting models are usually inefficient.

    “There’s a need for a high level of optimization,” says Fernandez.

    The DarwinAI approach to optimizing those models is to make engineers available who work closely with the developer of any given model to employ the best algorithms for the task. Those engineers optimize performance using a set of AI tools that DarwinAI developed on top of the open source TensorFlow framework. In effect, DarwinAI is using AI to optimize AI, says Fernandez.

    DarwinAI revealed that its tools have already been employed to generate a deep neural network for image classification that is 4.5 times more computationally efficient than the one produced by Google’s popular AutoML and Learn2Compress platforms. The same technology also generated an optimized version of DetectNet, Nvidia’s object detection network, that is 12 times smaller and four times faster than the original, the company claims.
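    DarwinAI’s tools are proprietary, so the claims above can’t be reproduced directly, but a claim like “12 times smaller” is typically measured by comparing parameter counts between a baseline network and its compressed counterpart. The following sketch uses invented layer shapes purely to illustrate that kind of comparison; none of the numbers correspond to DetectNet or DarwinAI’s actual results.

```python
# Hypothetical sketch: compare parameter counts of a baseline and a
# "compressed" convolutional network. Layer shapes are invented for
# illustration and do not reflect any real DarwinAI or Nvidia model.

def conv_params(in_channels, out_channels, kernel_size):
    """Parameters in one conv layer: weights plus one bias per filter."""
    return kernel_size * kernel_size * in_channels * out_channels + out_channels

def total_params(layers):
    """Sum parameters over a list of (in_ch, out_ch, kernel) conv layers."""
    return sum(conv_params(i, o, k) for i, o, k in layers)

# Invented baseline: three 3x3 conv layers of increasing width.
baseline = [(3, 64, 3), (64, 128, 3), (128, 256, 3)]
# Invented compressed variant with narrower layers.
compressed = [(3, 16, 3), (16, 32, 3), (32, 64, 3)]

ratio = total_params(baseline) / total_params(compressed)
print(f"baseline:   {total_params(baseline):,} params")
print(f"compressed: {total_params(compressed):,} params")
print(f"size reduction: {ratio:.1f}x")
```

    Real comparisons would also account for runtime (latency or FLOPs), since a smaller parameter count does not automatically translate into the “four times faster” figure the company cites.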

    As a side benefit, Fernandez says Darwin Generative Synthesis makes it much simpler to document AI processes in a way that makes them more easily understood by developers and ultimately explainable to the business users who fund AI projects. That capability will be crucial when it comes time to document how an AI-infused process works as part of any audit the organization may need to pass, adds Fernandez.

    Longer term, Fernandez says Darwin Generative Synthesis will become a key element of more structured approaches to building AI models that will borrow best practices pioneered by organizations that have embraced DevOps.

    It may take a while for the development of AI models to become more structured than it typically is today. But as the development of AI models becomes more efficient, the rate at which those models are embedded in business processes will almost certainly accelerate in the months and years ahead.

    Mike Vizard
    Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.
