IBM today made its first foray into applying DevOps methodologies to building artificial intelligence (AI) applications. Announced at the IBM Think 2018 conference, the latest IBM tools and services for building AI applications include a Deep Learning as a Service capability that has been embedded with IBM Watson Studio development tools and Watson Data Kits that come prepopulated with data for specific vertical industries.
IBM also announced it has extended its alliance with Apple to combine IBM Watson machine learning with Apple Core ML to make it easier to build AI applications employing machine learning algorithms developed and curated by both companies.
Finally, IBM announced IBM Watson Assistant, a voice-enabled digital assistant that can be embedded into enterprise applications.
Ruchir Puri, chief architect for IBM Watson, said IBM is delivering a combination of services and tools that collectively will create a set of DevOps processes for managing both the development of AI models and the complex data pipelines that must be created to feed data into those models.
The IBM Deep Learning as a Service offering is essentially a managed service that provides developers with access to hundreds of AI models based on neural networks that IBM has developed. IBM also revealed that the core technology used to create the service will be made available as an open source project called Fabric for Deep Learning (FfDL).
IBM also announced an open source Model Asset eXchange (MAX) online service, through which developers can discover free machine learning models, and the Center for Open Source Data and AI Technologies (CODAIT), which expands the mission of its Spark Technology Center to include building such models.
Puri noted that through all these offerings and services, IBM is moving to substantially lower the barrier to entry for building AI applications. Longer term, IBM also plans to make use of event-driven architectures enabled by serverless computing frameworks to make such models available on demand, he said.
Puri added that AI models require DevOps processes because, over time, organizations will need to update or replace those models as new data becomes available. In addition, AI is giving rise to a DataOps discipline for managing data pipelines, one that needs to be integrated with a comprehensive approach to DevOps, he said.
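The update-the-model-as-data-changes loop Puri describes can be sketched in a few lines. This is a purely illustrative toy, not an IBM API: the model, the drift check, and the threshold are all hypothetical, standing in for whatever retraining trigger a real pipeline would use.

```python
class SimpleModel:
    """Toy 'model' that predicts the mean of its training data."""
    def __init__(self):
        self.mean = 0.0
        self.version = 0  # bump on every retrain, as a DevOps build number would

    def train(self, data):
        self.mean = sum(data) / len(data)
        self.version += 1

    def predict(self):
        return self.mean


def should_retrain(model, new_data, threshold=1.0):
    """Flag a retrain when incoming data drifts away from the model's view.

    The threshold is a hypothetical tuning knob; real pipelines would use a
    proper drift metric instead of a simple difference of means.
    """
    new_mean = sum(new_data) / len(new_data)
    return abs(new_mean - model.predict()) > threshold


model = SimpleModel()
model.train([1.0, 2.0, 3.0])          # initial training run

incoming = [8.0, 9.0, 10.0]           # fresh data arriving in the pipeline
if should_retrain(model, incoming):   # drift detected: trigger a new "build"
    model.train(incoming)
```

In a real DevOps setup, the retrain branch would kick off an automated build-test-deploy cycle for the new model version rather than training in place.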
Puri said IBM expects AI models soon will be infused into every application being built or legacy application that needs to be updated. Given the scale of the task at hand, organizations will need to consider carefully how internal and external data flows into the models, which require continual access to data to learn. At the same time, organizations also will need to focus on ensuring the quality of that data to prevent machine learning algorithms from coming to the wrong conclusions.
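The data-quality concern above is usually addressed with a validation gate that rejects bad records before they reach a model. A minimal sketch, with hypothetical field names and range rules chosen purely for illustration:

```python
def validate_records(records):
    """Split incoming records into clean rows and rejects with a reason.

    The 'value' field and the 0-100 range are illustrative assumptions;
    a real pipeline would validate against its own schema.
    """
    clean, rejects = [], []
    for record in records:
        value = record.get("value")
        if value is None:
            rejects.append((record, "missing value"))
        elif not (0 <= value <= 100):
            rejects.append((record, "out of range"))
        else:
            clean.append(record)
    return clean, rejects


rows = [{"value": 42}, {"value": None}, {"value": 250}]
clean, rejects = validate_records(rows)  # 1 clean row, 2 rejected
```

Logging the rejects, rather than silently dropping them, is what lets a DataOps team trace a wrong conclusion back to the data that caused it.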
Adoption of DevOps processes has been spotty so far. But as organizations begin to incorporate AI into their applications, many will discover it’s simply not feasible to implement AI without a robust set of DevOps processes in place to support the models.