At Google I/O 2021, Google took to the main stage, announcing a new unified user experience, Google Vertex AI, and supportive tooling that aims to speed and scale artificial intelligence development efforts. 

Omdia view

Summary

At Google I/O 2021, Google took to the main stage as it has done for the past 13 years, showcasing what’s new for Android and Chrome developers. As is customary, the company extolled the value of Android, Google Photos, Google Maps, et al., but in a growing break from tradition, Google also focused on artificial intelligence (AI) development in the enterprise, with a series of announcements headlined by the general availability of a new, comprehensive, and unified user experience, Google Vertex AI, and supportive tooling for AI development atop Google Cloud Platform (GCP).

Data science is and will always be a team sport

The fact that Google would dedicate a tremendous amount of time to AI development in the enterprise at its flagship Android and Chrome developer conference speaks to the company’s and the industry’s continuing belief that an investment in data science has become an outright necessity for companies seeking not just to survive but to thrive within an increasingly disruptive business landscape. But in order to put data science—machine learning (ML) more specifically—to work across the enterprise, companies must first contend with the many complexities and challenges that stem from the fact that ML is and will always be a community effort spread across an array of discrete user roles and skills. Those roles often require skills that are in high demand, a demand that the marketplace has yet to fulfill. To address this skills gap, companies like Google are actively working to augment and automate human decision-making through technologies like AutoML, something that figures heavily into Google’s Vertex AI launch. Further, because this team sport crosses the still perilous chasm between business and IT, enterprise practitioners have begun turning to an emerging and rapidly growing class of AI development platforms that promise to span that chasm through the operationalization of the entire ML lifecycle.

These MLOps platforms espouse notions commonly associated with DevOps, namely continuous integration and continuous delivery (CI/CD) of business value. They do so by uniting the many and varied user roles (data engineer, data scientist, business owner, systems engineer, and developer, just to name a few) and wrapping their efforts within a unified framework of well-integrated tools and user experiences. Unfortunately, very few MLOps solutions in the marketplace have achieved a true level of unification across the ML lifecycle, as Omdia concluded in its recent review of these solutions (see Omdia Universe: Selecting an Enterprise MLOps Platform, 2021). While most solutions offer pointed capabilities targeting specific tasks such as data preparation and processing or model deployment and management, it is very rare to see a single platform operationalize the entire spectrum of ML development.

Orchestrating success through MLOps

With the release of its new, unified AI user experience, Vertex AI, Google intends not only to create that difficult-to-find unity of experience for all AI practitioners, but also to use that common foundation, plus Google’s in-house ML algorithm services, as a way to propel ML projects to fruition, and to do so at scale without sacrificing management and governance requirements. In summary, Vertex AI is a managed ML platform that seeks to accelerate ML development by both orchestrating processes and cutting down on the amount of code required to build and train models. To accomplish this, Vertex AI combines a number of GCP services within a single user experience and supportive API. Within this new experience, developers have access to a number of new capabilities:

  • Seamless integration of GCP AI toolkits spanning vision, language, conversation, and structured data.
  • MLOps-centric tools including Google Vertex Vizier, Vertex Feature Store, and Vertex Experiments.
  • Management and governance tools including Vertex Continuous Monitoring, Vertex ML Edge Manager, and Vertex Pipelines.

Together, these new tools create for Google a much more complete and compelling AI platform, one that emphasizes full-lifecycle orchestration. But an interesting aspect of Google Vertex AI is that it isn’t just about unification and automation. It also seeks to speed up AI development, as evidenced by the company’s new guided workflows (golden paths) that shepherd users down the most expedient and automated development route spanning ingestion, analysis, transformation, training, tracking, evaluation, and deployment. Advantageously, users are not bound to stay on this path: they can drop down into fully manual functionality at each step, manually labeling data, bringing in unmanaged datasets, or writing custom code, all without breaking the end-to-end management and governability that comes with automation.
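The golden-path idea can be illustrated with a minimal sketch in plain Python. This is a conceptual illustration, not the Vertex AI SDK; all names below are hypothetical. The point is that every stage has an automated default, any stage can be overridden with manual code, and every stage is still tracked either way:

```python
# Hypothetical sketch of a "golden path" pipeline: each stage runs an
# automated default, but callers may override any stage with manual code
# without losing end-to-end tracking. Not the Vertex AI SDK.

DEFAULT_STAGES = ["ingest", "analyze", "transform", "train",
                  "track", "evaluate", "deploy"]

def run_pipeline(data, overrides=None):
    """Run every stage in order, logging each one for governance."""
    overrides = overrides or {}
    audit_log = []
    for stage in DEFAULT_STAGES:
        # Default behavior: an automated pass-through step.
        step = overrides.get(stage, lambda d: d)
        data = step(data)
        audit_log.append(stage)   # every stage is tracked, manual or not
    return data, audit_log

# Drop down to manual functionality for a single stage:
manual_transform = lambda rows: [r * 2 for r in rows]
result, audit = run_pipeline([1, 2, 3], overrides={"transform": manual_transform})
```

Note that the manual override changes only the one stage; the audit log still records the full end-to-end path.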

In support of this approach, Google is relying heavily on Cloud AutoML and taking advantage of highly differentiated deep learning (DL) capabilities such as Neural Architecture Search (NAS), the technology that Google’s self-driving car division, Waymo, uses to automatically build models for tasks such as object detection using radar images. It also uses Vertex Vizier, a black-box optimization service that helps users automate and accelerate the complex task of tuning hyperparameters. These automated capabilities provide a number of advantages for enterprise practitioners that go well beyond democratizing data science. They help to better balance the scales between speed and cost, enabling developers to more freely and cost-effectively experiment (for example, finding the best algorithm and scoring the best models) by accelerating time to value with fewer dead ends and false starts. More than that, tools like Google Matching Engine promise to dramatically reduce infrastructure costs (up to 50% savings in CPU consumption and memory utilization, depending on the task at hand, according to Google).
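Conceptually, a black-box tuner like Vertex Vizier suggests hyperparameter trials and learns only from the reported scores, never from the model internals. A toy random-search version (my own illustration, not the Vizier API) conveys the idea:

```python
import random

# Toy black-box hyperparameter search in the spirit of Vertex Vizier:
# the tuner only ever sees (parameters -> score) pairs, so it works with
# any training routine. Illustrative sketch only, not the Vizier API.

def tune(objective, space, trials=50, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(trials):
        # Sample a candidate from the declared search space.
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(params)          # the "black box" evaluation
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Example: maximize a simple objective whose optimum sits at lr = 0.1.
objective = lambda p: -(p["lr"] - 0.1) ** 2
best, score = tune(objective, {"lr": (0.0, 1.0)}, trials=200)
```

Production services like Vizier replace the random sampling above with smarter strategies (e.g., Bayesian optimization), but the contract, suggest a trial and score it, is the same.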

Google’s highly automated approach to ML development extends beyond AutoML to embrace numerous MLOps concerns, drawing on the company’s recently introduced feature store along with a host of supportive services including BigQuery (data warehouse), serverless computing (AI Platform Training), integration (Cloud Dataflow), model transparency (Explainable AI), trust (What-if Tool), and repeatability (ML Metadata and Pipelines). Taken together, Google hopes this tightly integrated suite of services creates a network effect for customers by both speeding up the ML lifecycle and encouraging broader adoption across the enterprise. For instance, using Vertex Pipelines, customers can build non-linear workflows to start up processes such as unit testing long before finishing the data modeling process. Similarly, the more features stored within Google’s feature store, the more use cases on offer for customers to explore through re-use and adaptation.

This same reasoning is evident in the way Google has incorporated highly operationalized supportive technologies into its platform, through which Google intends to create economies of scale for enterprise practitioners by doing away with the management of underlying resources while injecting value-added functionality within those resources. This philosophy manifests in two ways. First, Google’s unified AI platform enables any software built on it to consistently inherit underlying platform security, governance, and management capabilities. Building software across a complex pipeline often does not allow for consistent authentication, privacy, and security readiness. Taking security as an example, Google users have consistent access to supportive security services such as access transparency and data residency. Second, Google Vertex AI tightly integrates with BigQuery and BigQuery ML. This creates some advantageous pairings, such as the use of Vertex Vizier and AutoML on top of BigQuery ML to further open up highly performant ML automation to enterprise users steeped in structured query language (SQL) development.

Research as the route to enterprise AI adoption

With a new unified experience, supported by MLOps orchestration, advanced modeling automation, consistent security tooling, and deep integration with several supportive data services, has Google jumped to the front of the line within the enterprise AI development marketplace? As with many complex propositions, that will depend on customer requirements, maturity, and experience. Undoubtedly, these changes bring Google into much closer alignment with enterprise MLOps platform market leader AWS in terms of AI development services integration and unification. And they create for Google a compelling AI development automation and augmentation story in general.

However, Omdia finds that with these enhancements Google is charting a unique course that emphasizes an aspect of the market where the company can explore undiscovered countries in search of competitive differentiation—namely, AI research. With differentiated technologies at the ready (e.g., TensorFlow), unique solutions to wide-scale challenges (e.g., Federated Learning within Google Gboard), and a significant set of internal research organizations (including the Google Brain, Google Research, and DeepMind groups), Google stands as a dominant innovator and center of scientific power within the AI marketplace.

The new innovations announced at Google I/O centering on research-led DL technologies such as NAS, Vertex Vizier, Matching Engine, and AutoML Forecast will further solidify the company’s reputation among data scientists. More importantly, they signal the company’s intent to make GCP the preferred home for AI developers seeking out the best-in-breed AI algorithms and tooling. For example, while most of the company’s DL technology will be available as open source in some capacity, Google will strive to make its commercial implementations the preferred choice among buyers. In this way, all of the MLOps-focused capabilities also announced at the show (including Feature Store, ML Metadata, and Pipelines) principally play a supporting role to Google’s leading DL technologies, linking those with the company’s underlying AI hardware and software while providing an end-to-end, MLOps-savvy development experience for AI practitioners looking to gain an advantage through the adoption of Google’s DL technology. 

Will these new technologies and unified user experience be enough to convince buyers to make GCP their home base for AI development? Certainly, with a more complete platform now in general availability, enterprise practitioners can view Google as a more enterprise-grade option with a more readily digestible portfolio of services. And with many of its new, research-led innovations tuned to run best on GCP, the company will be able to draw customers away from the competition, particularly those looking for both advanced functionality and a more augmented, guided development experience. Over the long term, however, Google will need to be very careful in how it handles its open source software, ensuring that the software it opens up to the market is just as capable as the rendition running on GCP. Further, building on top of its hybrid and multi-cloud platform, Google Anthos, the company will need to ensure that GCP itself becomes an open, multi-cloud-capable platform, one willing to work with and even manage data and technology running across the platforms of its competitors.

Appendix

Further reading

Omdia Universe: Selecting an Enterprise MLOps Platform, 2021 (April 2021)

Author

Bradley Shimmin, Chief Analyst, AI Platforms, Analytics and Data Management

askananalyst@omdia.com