IBM today unveiled a variety of tools intended to make data scientists significantly more productive. An update to the PowerAI deep learning software provides access to tools that make it simpler to train artificial intelligence (AI) models and to integrate them with the Apache Spark in-memory computing framework. At the same time, IBM is making available Data Science Experience Local, a collaboration application for data scientists that can be deployed on premises.
Sumit Gupta, vice president for high performance computing and data analytics at IBM, says the new additions to PowerAI make it possible for data scientists to train AI models at a much higher level of abstraction. These capabilities are packaged alongside open source frameworks for building AI models, such as TensorFlow and Caffe, which IBM curates in a PowerAI distribution optimized for its Power Systems servers.
Gupta says the difference between AI and traditional applications is that AI applications are based on models that need to be continuously trained rather than programmed. The reason more AI applications are being developed, he says, is that it's now possible to expose deep learning algorithms to massive amounts of data. That convergence of algorithms and data, Gupta adds, will be felt across every major industry.
Gupta notes that many of the core algorithms being used to develop AI models have been around for decades. IBM is now making those algorithms more accessible, in addition to providing tools for monitoring their performance.
“There is a revolution coming that is being driven by AI,” says Gupta.
While mastering AI technologies is still a challenge for most organizations, it's now a matter of when, not if, AI applications become commonplace. The real issue is figuring out where to apply those AI models in ways that drive business processes never thought possible.