AT&T and Indian IT outsourcing company Tech Mahindra announced that Acumos, code the companies jointly developed to drive artificial intelligence (AI) applications, is being contributed to the Linux Foundation.
Mazin Gilbert, vice president of advanced technology for AT&T Labs, says the goal of Acumos is not only to democratize access to AI software, but also to make it consumable as a microservice with a standard set of DevOps processes.
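As a rough illustration of what consuming a model as a microservice can look like, the sketch below wraps a previously trained model behind a standard REST endpoint using Flask. The model file, route and port are hypothetical assumptions, and this is not Acumos's actual packaging format.

```python
# Illustrative sketch only: exposing a trained model behind a standard
# /predict endpoint so it can be deployed and operated like any other
# microservice. The model file, route and port are hypothetical.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

with open("model.pkl", "rb") as f:  # hypothetical serialized model artifact
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON payload such as {"features": [0.3, 12, 1]}
    features = request.get_json()["features"]
    prediction = model.predict([features]).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(port=8080)
```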
Gilbert says AT&T decided to contribute Acumos to the Linux Foundation because it has become clear that proprietary implementations of AI platforms will prove too difficult and costly to support. AT&T is addressing this issue now because it currently has 10 proprietary AI platforms attached to its network and expects to have hundreds attached within a matter of months.
Making Acumos available as an open-source project should go a long way toward reducing that complexity by providing a standard interface for integrating various AI platforms, says Gilbert. Acumos includes a standard mechanism for downloading AI code from an app store, bringing AI functionality to where data resides instead of requiring developers to load data into a proprietary cloud service to access AI functions. In the case of AT&T, that integration is accomplished via the company’s Indigo software-defined network (SDN).
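In broad strokes, that pattern looks something like the following from a developer's side: pull a packaged model from a marketplace, then score it against a service running next to the data. The catalog URL, model name and endpoints below are illustrative assumptions, not the actual Acumos API.

```python
# Hypothetical sketch: fetching a packaged model from an app-store-style
# catalog and scoring it via a microservice deployed where the data lives.
# All URLs, names and endpoints here are illustrative.
import requests

CATALOG_URL = "https://marketplace.example.com/models"  # hypothetical catalog
MODEL_NAME = "churn-predictor"                          # hypothetical model

# 1. Download the packaged model artifact from the marketplace.
artifact = requests.get(f"{CATALOG_URL}/{MODEL_NAME}/artifact", timeout=30)
with open("model.zip", "wb") as f:
    f.write(artifact.content)

# 2. The artifact is deployed alongside the data (for example, as a container)
#    and exposed as a REST microservice, so scoring happens where the data is.
LOCAL_ENDPOINT = "http://localhost:8080/predict"        # hypothetical service
response = requests.post(LOCAL_ENDPOINT, json={"features": [0.3, 12, 1]})
print(response.json())
```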
Gilbert notes that much of what passes for advanced AI today has been around for decades. Many of the algorithms being used to drive AI applications are now more than 50 years old. The only real differentiation lies in how those algorithms are combined to drive applications and in having access to enough data to make them useful. Because of that, AT&T doesn’t believe contributing Acumos to the Linux Foundation represents any major loss of intellectual property.
As AI becomes more democratized, DevOps teams will find themselves managing massive numbers of distributed data pipelines. AI applications require access to massive amounts of data that must be consumed via a consistently applied information architecture. In a practice sometimes referred to as DataOps, operations teams create pipelines that enable the algorithms embedded in a developer’s application to continually learn from changes that affect the recommendations being surfaced by the AI engine.
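A minimal sketch of such a pipeline, assuming a scikit-learn-style model that supports incremental updates, might look like the following; the record shape and batch source are hypothetical.

```python
# Minimal DataOps-style sketch: an operations-maintained pipeline keeps
# delivering fresh, consistently shaped batches of data so the model embedded
# in an application can keep learning. Names and shapes are illustrative.
from dataclasses import dataclass
from typing import Iterable

from sklearn.linear_model import SGDClassifier  # supports incremental learning

@dataclass
class Record:
    features: list[float]
    label: int

def data_pipeline(batches: Iterable[list[Record]], model: SGDClassifier) -> SGDClassifier:
    """Apply a consistent information architecture to each batch, then
    incrementally update the model so its recommendations track new data."""
    for batch in batches:
        X = [record.features for record in batch]
        y = [record.label for record in batch]
        model.partial_fit(X, y, classes=[0, 1])  # continual learning step
    return model
```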
At this point, it’s only a matter of time before every application is infused with some form of AI. In many cases, the front end to those AI capabilities will be a voice-enabled digital assistant trained either to optimize a specific process or to answer a broad range of questions. Applications that don’t provide these capabilities will soon be deemed archaic. From a DevOps perspective, the result should be a massive series of rolling upgrades to infuse AI capabilities into existing applications or replace them with ones that provide those capabilities. Regardless of how that is accomplished, without some form of integrated DevOps processes in place, there can be no data for AI applications to consume.
— Mike Vizard