Dotscience today launched a namesake platform for building and deploying artificial intelligence (AI) models based on a set of best DevOps practices.
Company CEO Luke Marsden said that as organizations realize AI models need to be trained and updated continuously, the need for a DevOps platform that accelerates that process will become more apparent. Today most AI models are trained over an extended period and then deployed within an application environment. Over time, however, either more data becomes available or organizations determine that the machine and deep learning algorithms originally used to create the AI model need to be updated or replaced. Whatever the underlying reason for replacing an AI model, a platform that addresses every aspect of the AI model life cycle, including testing, reproducibility, accountability, collaboration and continuous delivery, is required, he said.
To address those requirements, the Dotscience platform facilitates concurrent collaboration between development and operations teams, version control of the model creation process, real-time tracking of provenance records, exploration and optimization of hyperparameters when training a model, and tracking of workflows across multiple open source tools.
Most of the teams that build AI models have backgrounds in data science rather than application development. As such, Marsden noted, most of them have had little to no exposure to DevOps practices. In the absence of those processes, teams building AI models routinely encounter issues such as siloed data and technical debt, which conspire to extend the time required to build an AI model, Marsden said. In addition, teams building AI models need to keep track of not only versions of their code, but also runs that tie together input data with models and the corresponding hyperparameters and metrics, he added.
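Conceptually, the kind of run tracking Marsden describes ties a fingerprint of the input data to the hyperparameters used and the metrics produced by a single training run. A minimal sketch in Python, assuming a hypothetical `record_run` helper (this is illustrative only, not the Dotscience API):

```python
import hashlib
import json
from datetime import datetime, timezone

def record_run(input_data: bytes, hyperparameters: dict, metrics: dict) -> dict:
    """Build a provenance record for one training run, tying together
    a hash of the input data, the hyperparameters used, and the
    resulting metrics. Hypothetical helper for illustration."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_sha256": hashlib.sha256(input_data).hexdigest(),
        "hyperparameters": hyperparameters,
        "metrics": metrics,
    }

# Log one run of a hypothetical training job.
run = record_run(
    input_data=b"training-set-v1",
    hyperparameters={"learning_rate": 0.01, "epochs": 20},
    metrics={"accuracy": 0.92},
)
print(json.dumps(run, indent=2))
```

Appending records like this to a shared log is what lets a team later answer which data and settings produced a given model, rather than reconstructing that history from memory or spreadsheets.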
In the absence of a DevOps platform such as Dotscience, it becomes challenging for organizations to document what changes were made to an AI model and when, Marsden said. That governance issue has become especially problematic because organizations are coming under increased regulatory pressure to document how the AI models they employ are built and updated.
Dotscience’s “The State of Development and Operations of AI Applications” report, also published today, identifies the top three challenges with AI workloads as duplicating work (33%), rewriting a model after a team member leaves (27.6%) and justifying its value (27%). Based on a survey of 500 industry professionals, the report also finds 52% of respondents track provenance manually using tools such as spreadsheets, while 27% don’t track provenance at all but consider it important. The survey also finds 63% of businesses report spending between $500,000 and $10 million on their AI efforts.
As more organizations rely on DevOps processes to build and deploy applications, there’s little doubt that teams building the AI models inserted into those applications will have to fall in line with DevOps best practices. The challenge now is finding a way to extend those DevOps processes all the way back to the building of the AI models themselves. Only then is the AI model building and deployment process likely to become agile enough to keep pace with the rate of change now occurring across digital business processes.