Cloudera put out a call this week for the IT industry to define a set of open standards for machine learning operations (MLOps) and machine learning model governance that could be universally applied.
Santiago Giraldo, senior product marketing manager for data engineering at Cloudera, said the idea is to create the equivalent of a set of best DevOps practices for the machine learning models employed by artificial intelligence (AI) applications, which would serve to foster portability, interoperability and explainability.
In the absence of such standards, most AI models are essentially black boxes optimized for specific platforms that many organizations are reluctant to adopt because no one is quite sure how they operate. Furthermore, it’s not all that feasible right now to integrate multiple AI models built using different classes of tools.
At the same time, Giraldo said the further down the AI path organizations go, the more they realize the AI models they are building need to be either updated frequently or replaced outright as new data sources become available. Most AI models are trained to respond to specific events based on access to a finite set of data. As new data sources are introduced, organizations are discovering the accuracy of the recommendations being made by an AI model starts to degrade. In fact, only a limited number of AI models are making it into production environments today, largely because they didn't scale as expected.
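The degradation Giraldo describes is what practitioners call model drift, and catching it is a core MLOps task. As a minimal sketch (the function names and the 5-point tolerance are illustrative, not drawn from any particular tool), a team might compare live accuracy against the accuracy recorded at deployment time and flag the model for retraining when the gap grows too large:

```python
"""Illustrative sketch of accuracy-drift monitoring; names and
thresholds are hypothetical, not from a specific MLOps product."""

def accuracy(predictions, labels):
    # Fraction of predictions that match the ground-truth labels.
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def needs_retraining(baseline_acc, live_acc, tolerance=0.05):
    # Flag the model once live accuracy falls more than `tolerance`
    # below the accuracy measured when the model was deployed.
    return (baseline_acc - live_acc) > tolerance

# A model that scored 0.92 at deployment but 0.84 on recent traffic
# has drifted past the 5-point tolerance and should be retrained.
print(needs_retraining(0.92, 0.84))  # True
```

In practice the trigger would feed an automated retraining pipeline rather than a print statement, but the comparison itself is this simple.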
Cloudera is already moving down the MLOps path with Apache Atlas, an open source framework designed to integrate data management across explainable, interoperable and reproducible MLOps workflows. Now Cloudera is inviting the industry, including competitors, to collaborate on a broader set of metadata specifications that Giraldo said is more a call to action than a specific proposal.
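To make the idea of shared metadata concrete, a standardized model record might capture where a model came from, how it was trained and how it was evaluated. The field names below are purely illustrative, not an actual Apache Atlas schema or Cloudera proposal; the point is that a common, machine-readable shape is what would let models move between platforms:

```python
"""Hypothetical example of the kind of model metadata a shared
MLOps standard might define; all field names are illustrative."""
import json

model_metadata = {
    "name": "churn-classifier",
    "version": "2.1.0",
    "framework": "scikit-learn",  # portability: which runtime is required
    "training_data": ["events-2019-q3.parquet"],  # reproducibility
    "metrics": {"accuracy": 0.92},  # explainability: how it was evaluated
    "lineage": {"parent_model": "churn-classifier:2.0.0"},
}

# Serialize to JSON so any tool, regardless of vendor, can read it.
print(json.dumps(model_metadata, indent=2))
```

A record like this addresses the "black box" complaint directly: anyone inspecting the model can see its provenance and evaluation history without access to the platform that produced it.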
It’s unclear precisely how such an initiative might be managed. There is no widely recognized consortium initiative focused specifically on MLOps. Thus far, the closest thing might be The Deep Learning Foundation, an arm of The Linux Foundation that connects organizations working on innovative technical projects focused on AI and machine learning.
Despite all the hype, it’s apparent that AI is still in its infancy in terms of practical use cases. Most organizations struggle to find the data scientists required to build AI models, let alone afford to hire them. Even when they do, the competition for AI talent is so fierce that organizations are also finding it difficult to retain that talent. Giraldo said the goal should be to make it easier not only to build AI models but also to slipstream them into both new and existing applications.
Obviously, many of the best DevOps practices that have already been defined can be applied to AI models. The next big challenge might very well be educating data scientists on how to work more closely with developers to build, test and deploy AI models with applications at scale. In the meantime, the existing DevOps community might want to do as much as possible to help prevent a nascent MLOps community from reinventing some wheels that might already exist.