The platform, called Vertex AI, requires nearly 80% fewer lines of code to train a model than competing platforms, the company says, enabling data scientists and ML engineers at all levels of expertise to implement Machine Learning Operations (MLOps) and efficiently build and manage ML projects throughout the development lifecycle.
“We had two guiding lights while building Vertex AI: get data scientists and engineers out of the orchestration weeds, and create an industry-wide shift that would make everyone get serious about moving AI out of pilot purgatory and into full-scale production,” says Andrew Moore, vice president and general manager of Cloud AI and Industry Solutions at Google Cloud. “We are very proud of what we came up with in this platform, as it enables serious deployments for a new generation of AI that will empower data scientists and engineers to do fulfilling and creative work.”
Today, the company says, data scientists must manually stitch together ML point solutions, which slows model development and experimentation and means very few models ever reach production. Vertex AI is designed to tackle these challenges by bringing Google Cloud's services for building ML together under one unified UI and API, simplifying the process of building, training, and deploying machine learning models at scale.
Within Vertex AI's single environment, users can move models from experimentation to production faster, discover patterns and anomalies more efficiently, make better predictions and decisions, and stay agile amid shifting market dynamics. Now, the company says, for the first time, data science and ML engineering teams can:
- Access the AI toolkit used internally to power Google, spanning computer vision, language, conversation, and structured data, and continuously enhanced by Google Research.
- Deploy more useful AI applications faster with new MLOps features such as Vertex Vizier, which increases the rate of experimentation; the fully managed Vertex Feature Store, which helps practitioners serve, share, and reuse ML features; and Vertex Experiments, which accelerates the deployment of models into production through faster model selection.
- Manage models with confidence, removing the complexity of self-service model maintenance and repeatability with MLOps tools such as Vertex Continuous Monitoring and Vertex Pipelines, which streamline the end-to-end ML workflow.
The Vertex AI platform is generally available now.