At the Pure Accelerate 2017 conference this week, Pure Storage moved on two fronts: it is employing machine learning algorithms to automate the management of storage, and it is expanding its storage system portfolio to support artificial intelligence (AI) applications that require access to massive amounts of data.
Matt Kixmoeller, vice president of products for Pure Storage, says that thanks to advances in machine learning algorithms, IT organizations are about to witness a new era of self-driving storage. To accelerate that shift, Kixmoeller says, Pure Storage has developed Pure1 META, an artificial intelligence platform that collects telemetry data from customer arrays and uses it to help customers discover and predict performance and storage capacity issues.
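Pure Storage does not disclose how Pure1 META models its telemetry, but the general idea of predicting capacity issues from utilization data can be illustrated with a toy example: fit a linear trend to daily utilization samples and project when an array would hit full capacity. The function name and the sample data below are hypothetical, purely for illustration.

```python
# Toy illustration of trend-based capacity forecasting (NOT Pure1 META's
# actual, proprietary model): fit a least-squares line to daily utilization
# samples and project how many days remain until the array is full.

def days_until_full(utilization, capacity=100.0):
    """Fit a least-squares line to (day, percent-used) samples and return
    the projected days until `capacity` is reached, or None if utilization
    is flat or shrinking."""
    n = len(utilization)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(utilization) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, utilization))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope <= 0:
        return None  # no growth trend to project
    intercept = mean_y - slope * mean_x
    # Day at which the fitted line crosses `capacity`, minus days elapsed.
    return (capacity - intercept) / slope - (n - 1)

# Example: an array growing 2% per day, starting at 60% full.
samples = [60, 62, 64, 66, 68]
print(days_until_full(samples))  # projects 16.0 days until 100%
```

A production system would, of course, use far richer models and telemetry than a straight-line fit, but the payoff is the same: warning administrators before capacity runs out rather than after.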
At the same time, Kixmoeller says rapid adoption in the last year of AI applications across multiple vertical industries is increasing demand for high-capacity storage. AI applications typically need to be trained to identify patterns using massive amounts of data.
“The more data AI applications have access to, the better the outcome,” says Kixmoeller.
To provide the storage for those applications, Pure Storage this week announced that it has increased the capacity of its FlashBlade system to 4PB of raw capacity, which Pure Storage says yields an effective capacity of 8PB. That increase was achieved mainly by raising the maximum configuration to 75 blades per system, each of which can now hold as much as 17TB. The system also now supports the S3 object storage protocol developed by Amazon Web Services (AWS), to foster hybrid cloud storage deployments.
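The gap between the raw and effective figures comes from data reduction (deduplication and compression); the numbers above imply a 2:1 reduction ratio, as a quick back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the capacity figures above: the ratio of
# effective to raw capacity is the implied data-reduction ratio.
raw_pb = 4        # total raw capacity, in PB
effective_pb = 8  # effective capacity after data reduction, in PB

reduction_ratio = effective_pb / raw_pb
print(f"{reduction_ratio:.0f}:1 data reduction")  # prints "2:1 data reduction"
```

Actual reduction ratios vary with workload; highly compressible data sets exceed 2:1, while pre-compressed or encrypted data reduces far less.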
Meanwhile, on the storage array side of its portfolio, Pure Storage announced version 5.0 of Purity//FA, which adds support for active clusters across a metro area network, cloud integration, quality-of-service (QoS) controls, NVMe-based flash storage, Docker containers, and Virtual Volumes for VMware environments.
There’s no doubt that organizations of all sizes will be storing and accessing exponentially growing amounts of data in the months and years ahead. IT organizations will need to confront a host of issues to support the myriad advanced applications repeatedly accessing ever larger data sets. But one thing should be clear: if the value of all that data is ever to be realized, the first problem to solve is how to store it all efficiently.