Sundar Pichai introduced Vertex AI to the world at the Google I/O 2021 conference last year, positioning it against managed AI platforms from Amazon Web Services (AWS) and Microsoft Azure in the global AI market.
The Alphabet CEO once said, “Machine learning is a core, transformative way by which we’re rethinking how we’re doing everything.”
A November 2020 Gartner study predicted nearly 20% growth for managed services like Vertex AI. Gartner said that as enterprises invest more in mobility and remote-collaboration technologies and infrastructure, growth in the public cloud industry will be sustained through 2024.
Vertex AI replaces legacy services like AI Platform Training and Prediction, AI Platform Data Labeling, AutoML Natural Language, AutoML Vision, AutoML Video, AutoML Tables, and Deep Learning Containers. Let’s take a look at how the platform has fared and what’s changed over the last year.
What Is Google Vertex AI?
Google Vertex AI is a cloud-based managed machine learning (ML) platform for deploying and maintaining artificial intelligence (AI) models. The machine learning operations (MLOps) platform blends automated machine learning (AutoML) and AI Platform into a unified application programming interface (API), client library, and user interface (UI).
Previously, data scientists had to train algorithms on massive datasets largely by hand. Now the Vertex technology stack does the heavy lifting: it has the computing power to tackle complex problems and run billions of training iterations, and it can surface the best algorithms for specific needs.
Vertex AI uses a standard ML workflow consisting of stages like data collection, data preparation, training, evaluation, deployment, and prediction. Although Vertex AI has many features, we’ll look at some of its key features here.
- Whole ML Workflow Under a Unified UI Umbrella: Vertex AI comes with a unified UI and API for every Google Cloud service based on AI.
- Integrates With Common Open-Source Frameworks: Vertex AI blends easily with commonly used open-source frameworks like PyTorch and TensorFlow and supports other ML tools through custom containers.
- Access to Pretrained APIs for Different Datasets: Vertex AI makes it easy to integrate video, images, translation, and natural language processing (NLP) with existing applications. It empowers people with minimal expertise and effort to train ML models to meet their business needs.
- End-to-End Data and AI Integration: Vertex AI Workbench enables Vertex AI to integrate natively with Dataproc, Dataflow, and BigQuery. As a result, users can either develop or run ML models in BigQuery or export data from BigQuery and execute ML models from Vertex AI Workbench.
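The standard workflow stages mentioned above (data collection, preparation, training, evaluation, deployment, and prediction) can be sketched with a toy, stdlib-only example. Everything here is invented for illustration — on Vertex AI these stages are managed services, not hand-rolled code:

```python
import random

# --- Data collection: synthetic 1-D points labeled by a hidden rule ---
random.seed(0)
xs = [random.uniform(0, 10) for _ in range(200)]
data = [(x, int(x > 6.0)) for x in xs]  # label: is x above 6.0?

# --- Data preparation: shuffle and split into train/test sets ---
random.shuffle(data)
train, test = data[:150], data[150:]

# --- Training: pick the cutoff that best separates the training labels ---
def accuracy(cutoff, rows):
    return sum(int(x > cutoff) == y for x, y in rows) / len(rows)

best_cutoff = max((c / 10 for c in range(0, 101)),
                  key=lambda c: accuracy(c, train))

# --- Evaluation: measure accuracy on held-out data ---
test_acc = accuracy(best_cutoff, test)

# --- Deployment/prediction: the "model" is just the learned cutoff ---
def predict(x):
    return int(x > best_cutoff)

print(round(best_cutoff, 1), round(test_acc, 2))
```

The point is the shape of the pipeline, not the model: each stage feeds the next, and the deployed artifact is whatever the training stage produced.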
What’s Included in the Latest Update?
Google understands that research is key to becoming an AI-first organization. Many of Google's products started as internal research projects; DeepMind's AlphaFold work, for example, led to running protein prediction models in Vertex AI.
Similarly, research into neural networks provided the groundwork for Vertex AI NAS (neural architecture search), which lets data science teams train models with lower latency and power requirements. Empathy for users' real-world needs also plays a significant role in how AI use cases are chosen. Some of the latest offerings within Vertex AI include:
Reduction Server
According to Google, the Reduction Server is an advanced technology that optimizes the bandwidth and latency of multisystem distributed training, in which ML training is distributed across multiple machines, GPUs (graphics processing units), CPUs (central processing units), or custom chips. As a result, training completes in less time and with fewer resources.
Tabular Workflows
This feature aims to give users control over the ML model creation process. Tabular Workflows let users decide which parts of the workflow they want AutoML technology to handle and which parts they would rather engineer themselves.
Vertex AI lets elements of Tabular Workflows be integrated into existing pipelines. Google has also added new managed algorithms, including advanced research models such as TabNet, algorithms for feature selection, model distillation, and more.
Serverless Apache Spark
Vertex AI has been integrated with serverless Apache Spark, a unified open-source engine for large-scale data analytics. Vertex AI users can easily start a serverless Spark session for interactive code development.
Neo4j and Labelbox Integrations
Google's partnership with Neo4j lets Vertex users analyze data features in Neo4j's platform and then deploy ML models with Vertex. Similarly, a collaboration between Labelbox and Google makes Labelbox's data-labeling services for various datasets — images and text among them — accessible from the Vertex dashboard.
Example-Based Explanations
When mislabeled data drags down model quality, Example-based Explanations offer a remedy: the new Vertex feature uses example-based explanations to help diagnose and fix data issues.
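Example-based explanations work by retrieving the training examples most similar to a puzzling prediction, which often surfaces mislabeled neighbors. A hedged stdlib sketch of that retrieval idea — the 2-D "embeddings" and labels are invented, and the real feature indexes learned embeddings at much larger scale:

```python
import math

# Toy "embedding space": each training example is a 2-D vector plus a label.
train = [
    ((0.9, 1.0), "cat"),
    ((1.0, 0.9), "cat"),
    ((1.1, 1.1), "dog"),   # suspicious label sitting in the cat cluster
    ((5.0, 5.1), "dog"),
    ((5.2, 4.9), "dog"),
]

def nearest_examples(query, k=3):
    """Return the k training examples closest to the query embedding."""
    return sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]

# A query near the "cat" cluster pulls in the oddly labeled neighbor,
# flagging that example for a labeling review.
neighbors = nearest_examples((1.0, 1.0))
print(neighbors)
```

When a retrieved neighbor's label disagrees with the rest of its cluster, that example is a natural candidate for relabeling.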
Problem-Solving With Vertex AI
Google claims that Vertex AI requires 80% fewer lines of code than other platforms to train AI/ML models with custom libraries, and its custom tools support advanced ML coding. Vertex AI's MLOps tools remove the complexity of self-service model maintenance: they streamline ML pipeline operations, while Vertex Feature Store lets teams serve, share, and reuse ML features.
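A feature store centralizes feature values keyed by entity, so training jobs and online prediction read identical values instead of recomputing them separately. A hedged, in-memory sketch of the concept — the entity types, feature names, and values are invented, and Vertex Feature Store is a managed, distributed service rather than a Python dict:

```python
# Minimal in-memory feature store: features are stored per entity type and
# entity ID, so batch training and online serving read the same values.
class FeatureStore:
    def __init__(self):
        self._store = {}  # (entity_type, entity_id) -> {feature: value}

    def write(self, entity_type, entity_id, features):
        key = (entity_type, entity_id)
        self._store.setdefault(key, {}).update(features)

    def read(self, entity_type, entity_id, feature_names):
        row = self._store.get((entity_type, entity_id), {})
        return {name: row.get(name) for name in feature_names}

store = FeatureStore()
store.write("user", "u123", {"age": 34, "lifetime_value": 220.5})

# Online serving and batch training both read the same feature values.
print(store.read("user", "u123", ["age", "lifetime_value"]))
```

The key property being illustrated is a single write path with consistent reads, which is what eliminates training/serving skew.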
Even practitioners without formal AI/ML training can use Vertex AI, as it offers tools to manage data, build prototypes, run experiments, and deploy ML models. It also lets them interpret and monitor AI/ML models in production.
A year after Vertex's launch, Google is orienting the platform toward real-world applications. As showcased at Google I/O, the company's mission is to solve human problems, which suggests its efforts will be directed toward finding transformative ways of doing things through AI.