Actifio and IBM have formed an alliance under which IBM will make available data management software developed by Actifio under its own name as part of an effort to close the gap between DevOps and DataOps.
At the same time, Actifio will make available a software-as-a-service (SaaS) offering for the first time, dubbed Actifio Go, which promises to make it easier to manage and move data across multiple clouds.
Actifio CEO Ash Ashutosh said that with the rise of both DevOps and DataOps, there is now a unique opportunity to unify the management of data pipelines across the enterprise. While storage administrators have been creating data pipelines for databases and applications for decades, the pipelines developers create to consume that data have been built and managed separately. Now a nascent DataOps effort is emerging, based on a set of best practices for automating the management of data that borrows many of the same principles DevOps proponents advance for building and deploying applications.
Actifio’s core Virtual Data Pipeline technology advances DataOps, Ashutosh said, by making it easier to create a virtual copy of a data set running in a production environment and refresh it with the most recent data at any point in time. That capability is being used not only to copy data into the cloud, for example, but also to give DevOps teams self-service access to data as needed. The goal, he added, is to let developers dynamically consume the latest version of any data set with minimal intervention required on the part of a storage administrator.
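The core idea behind that capability, copy-data virtualization, can be illustrated with a minimal sketch. This is a hypothetical toy model, not Actifio's actual implementation: clones share a point-in-time base snapshot and record only their own changes (copy-on-write), so each developer gets an independent "virtual copy" without duplicating the full data set.

```python
class VirtualDataPipeline:
    """Toy model of copy-data virtualization (illustrative only):
    a point-in-time snapshot of production data that can hand out
    lightweight, independently writable virtual clones."""

    def __init__(self, production_data):
        self._snapshot = dict(production_data)  # point-in-time base copy

    def refresh(self, production_data):
        """Update the base snapshot with the most recent production data."""
        self._snapshot = dict(production_data)

    def clone(self):
        """Self-service: hand a developer a virtual copy on demand."""
        return VirtualClone(self._snapshot)


class VirtualClone:
    """A virtual copy: reads fall through to the shared snapshot,
    writes land in a private delta (copy-on-write)."""

    def __init__(self, snapshot):
        self._base = snapshot  # shared, treated as read-only
        self._delta = {}       # this clone's private changes

    def read(self, key):
        return self._delta.get(key, self._base.get(key))

    def write(self, key, value):
        self._delta[key] = value  # the base snapshot stays untouched
```

A developer's writes affect only their own clone; refreshing the snapshot makes the newest production data available to every clone created afterward, which is the self-service pattern described above.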
IBM plans to make an instance of Virtual Data Pipeline available as part of its IBM InfoSphere portfolio of data management software. That agreement extends Actifio’s existing reseller relationship with IBM.
Ashutosh conceded there is much work to be done before DataOps becomes a natural extension of DevOps. Most storage administrators today are not especially familiar with how the data pipelines they create are consumed by applications; they continue to think in terms of volumes and files rather than in terms of managing data pipelines, he said.
But as those application environments become more dynamic thanks to the rise of microservices, pressure to modernize data management will only increase, Ashutosh noted. For example, more DevOps teams will be integrating the ability to copy data directly into their continuous integration/continuous delivery (CI/CD) platforms. Achieving that goal will be easier if the software employed to manage data within the context of a DevOps process is invoked as a SaaS application, he said.
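What integrating a data copy into a CI/CD run might look like can be sketched as follows. Everything here is hypothetical: `virtual_data_copy` stands in for whatever provisioning API a copy-data platform would expose, and a plain dict copy stands in for the virtual copy itself. The point is the pattern: provision a throwaway copy of production-shaped data at the start of a pipeline stage, run the tests against it, then release it.

```python
from contextlib import contextmanager


@contextmanager
def virtual_data_copy(production_snapshot):
    """Hypothetical CI hook: provision a throwaway copy of production
    data for one pipeline run. A real platform would make an API call
    here; a plain dict copy stands in for illustration."""
    clone = dict(production_snapshot)
    try:
        yield clone
    finally:
        clone.clear()  # release the copy when the stage finishes


def ci_test_stage(production_snapshot):
    """One CI/CD stage that tests against a fresh virtual data copy,
    leaving the production snapshot untouched."""
    with virtual_data_copy(production_snapshot) as data:
        # Example check a team might run against current, live-shaped data.
        return all(v >= 0 for v in data.values())
```

Because the copy is provisioned and torn down inside the stage, every pipeline run exercises the latest data without a storage administrator in the loop, which is the self-service, SaaS-style invocation described above.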
Most IT organizations are a very long way from melding DevOps and DataOps. Most are still struggling with mastering the fundamentals of DevOps. But the convergence of DevOps and DataOps is all but inevitable. The only thing that remains to be seen is the degree to which IT organizations will proactively seek to achieve that goal versus waiting for developers to force the issue.