GigaSpaces has added a DataOps framework to its in-memory computing platform that provides an alternative to Kubernetes by employing Nomad software from HashiCorp as a scheduler.
Yoav Einav, vice president of product for GigaSpaces, said many enterprise IT organizations aren’t in a position to adopt Kubernetes, yet they still need a means to embrace best DataOps practices to process massive amounts of data in real time.
Version 15.2 of the company’s namesake platform adds a GigaOps Stack module based on ElasticGrid, an alternative to Kubernetes, along with an Ops Manager, the gsctl command-line interface and a monitoring tool. GigaSpaces already natively supports Kubernetes, but Einav said a significant number of IT teams don’t have the skills required to deploy and manage Kubernetes clusters.
DataOps has emerged as a corollary to DevOps in that it defines a set of practices for automating the provisioning of storage systems that make data readily accessible to applications. The need to automate those processes has become more acute as organizations build and deploy applications infused with machine learning algorithms, noted Einav.
ElasticGrid is designed to provide a more declarative alternative for achieving that goal without requiring IT teams to learn how to programmatically spin up Kubernetes clusters, Einav said. That approach makes ElasticGrid more accessible to the average IT administrator, he added.
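The article doesn’t show ElasticGrid’s own manifest format, but because the stack relies on HashiCorp’s Nomad as its scheduler, the declarative model it describes resembles a standard Nomad job specification: an administrator states the desired workload and Nomad’s scheduler figures out where to run it. A minimal sketch follows; the job name, container image and resource figures are illustrative placeholders, not GigaSpaces artifacts.

```hcl
# Minimal Nomad job spec (HCL). Declarative: you describe the desired
# state and Nomad's scheduler decides placement and maintains the count.
job "cache-node" {
  datacenters = ["dc1"]
  type        = "service"

  group "grid" {
    count = 3  # desired number of instances; the scheduler keeps this many running

    task "node" {
      driver = "docker"

      config {
        image = "example/in-memory-node:latest"  # placeholder image
      }

      resources {
        cpu    = 500  # MHz
        memory = 1024 # MB
      }
    }
  }
}
```

Submitting the file with `nomad job run cache-node.nomad` hands the desired state to the scheduler; contrast this with imperatively scripting cluster and pod creation against the Kubernetes API.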
In addition to supporting both virtual machines and containers, ElasticGrid is approximately 20% faster than Kubernetes, Einav said.
Other capabilities added in the latest release of GigaSpaces include a stream iterator optimization that doubles throughput for read-intensive workloads, along with support for memory-level encryption and per-property compression to better adhere to data privacy regulations.
GigaSpaces is making the case for an in-memory computing platform that is optimized to run applications in near real-time. As organizations embrace a wide range of digital business applications, legacy IT infrastructure that was optimized for batch processing needs to be updated to support applications that process data in memory in near real-time.
It’s not clear to what degree organizations will invest in new platforms during the economic downturn brought on by the COVID-19 pandemic. Some will no doubt pull back on those investments. However, a significant number of organizations are also accelerating the rate at which they build digital business applications to drive newly minted business continuity strategies. Recognizing the need for more resilient applications that customers and suppliers can access from anywhere, many organizations are moving up the deadlines for delivering these applications.
Of course, there are multiple options available when it comes to deploying an in-memory computing platform to support these applications. The challenge many organizations will encounter, however, is finding an in-memory computing platform that also addresses the inherent challenges of moving large amounts of data in and out of the platform on demand.