The rise of machine- and deep-learning algorithms is about to create some interesting data-gravity challenges for IT operations teams. For most of the last decade, IT operations teams have been caught up in a public- versus private-cloud computing debate that has largely been specious; the truth is that most organizations wind up employing both. But just as IT organizations are getting used to that multi-cloud reality, the rise of machine- and deep-learning algorithms being used to create a new generation of advanced artificial intelligence (AI) applications is about to add some significant nuances to managing the DevOps pipeline.
New applications tend to be built and deployed where they can access the most relevant data. Most of the data employed by the average enterprise resides on-premises today. But it’s clear that the data being created in various types of cloud services is growing at a faster rate, thereby altering data-gravity forces across the enterprise.
To get the most value out of that data, developers are now building a new generation of advanced applications that employ machine- and deep-learning algorithms. In most cases, those algorithms are being deployed wherever the data happens to reside. For example, Salesforce announced at its TrailheaDX 2017 conference that it is making additional APIs available on Salesforce Einstein Platform Services, so custom applications can now be infused with sentiment analysis, object detection and the ability to understand customer intent.
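For context, consuming those Einstein services is typically just a REST call from whatever application needs the prediction. The sketch below is illustrative only: the endpoint, model ID and token handling are assumptions based on how the Einstein Platform Services sentiment API has generally been documented, not details taken from the announcement.

```python
import requests

# Hypothetical values for illustration -- obtain a real access token via the
# Einstein Platform Services OAuth flow described in Salesforce's docs.
EINSTEIN_TOKEN = "<access token>"
SENTIMENT_URL = "https://api.einstein.ai/v2/language/sentiment"  # assumed endpoint


def score_sentiment(text: str) -> dict:
    """Send a single document to the (assumed) Einstein sentiment endpoint."""
    response = requests.post(
        SENTIMENT_URL,
        headers={"Authorization": f"Bearer {EINSTEIN_TOKEN}"},
        # The service expects multipart form fields; (None, value) tuples make
        # requests send plain form fields rather than file uploads.
        files={
            "modelId": (None, "CommunitySentiment"),  # assumed prebuilt model
            "document": (None, text),
        },
    )
    response.raise_for_status()
    return response.json()  # e.g. probabilities for positive/negative/neutral


if __name__ == "__main__":
    print(score_sentiment("The new release fixed our deployment issues."))
```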
Michael Machado, director of product management for Salesforce Einstein, says the success of any AI initiative comes down to both the amount and the quality of the data being accessed. As a CRM application holding a mix of structured and unstructured customer data, Salesforce represents an attractive platform for building AI applications. To facilitate that work, Salesforce is also making it simpler for developers to collaborate across multiple alliances, in addition to providing access to DevOps tools from Atlassian and GitHub.
But CRM applications are only one element of the typical application portfolio. The more data a set of algorithms can be applied against, the faster the application learns and the more accurate it becomes. As developers build the next generation of applications, they will want access to as much data as possible. For many IT organizations, that means those applications will be hybrid almost by definition; attempting to centralize all their data will prove impractical as well as prohibitively expensive. Constructing application pipelines that span multiple data sources, however, requires a very mature set of DevOps processes.
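To make the hybrid point concrete, here is a minimal, entirely hypothetical sketch of a pipeline step that pulls the same customer entities from an on-premises database and a cloud REST API and joins them into a single training set. Every name, table, column and endpoint below is invented for illustration; it is one way such a step could look, not a prescribed implementation.

```python
import sqlite3          # stand-in for an on-premises relational store
import requests         # stand-in for a cloud service's REST data source
import pandas as pd


def load_on_prem(db_path: str) -> pd.DataFrame:
    """Read customer records from a local (on-premises) database."""
    with sqlite3.connect(db_path) as conn:
        return pd.read_sql_query(
            "SELECT customer_id, churn_score FROM customers", conn
        )


def load_cloud(api_url: str, token: str) -> pd.DataFrame:
    """Fetch the same customer entities from a cloud REST endpoint."""
    resp = requests.get(api_url, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return pd.DataFrame(resp.json())  # assumes a JSON list of records


def build_training_set(db_path: str, api_url: str, token: str) -> pd.DataFrame:
    """Join on-prem and cloud records so the model trains on the widest data set."""
    on_prem = load_on_prem(db_path)
    cloud = load_cloud(api_url, token)
    return on_prem.merge(cloud, on="customer_id", how="inner")
```

Even in this toy form, the operational questions the article raises are visible: credentials, network paths and data freshness have to be managed for each source, which is exactly where mature DevOps processes come in.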
Arguably, the rise of these next-generation applications might finally force the DevOps issue inside most organizations. Most have been able to manage application development on various cloud platforms in isolation. Going forward, that may no longer be practical, as developers seek to incorporate more data sources within their application environments. The challenge IT operations teams will need to wrestle with is providing timely access to that data in a way that doesn’t ultimately compromise application performance.
— Mike Vizard