NVIDIA has announced its intention to acquire Arm from SoftBank for $40 billion in an effort to create a juggernaut that will build processors optimized for artificial intelligence (AI) applications spanning edge computing deployments to the cloud.
Under the terms of the proposed deal, Arm will operate as a subsidiary of NVIDIA and will continue to be headquartered in Cambridge, UK. NVIDIA will pay SoftBank a total of $21.5 billion in NVIDIA common stock and $12 billion in cash, which includes $2 billion payable at signing. SoftBank will hold a 10% stake in NVIDIA.
During a conference call, NVIDIA CEO Jensen Huang also committed to licensing the software NVIDIA uses to make graphics processing units (GPUs) under the same licensing model Arm has employed to encourage manufacturing partners to create processors based on its designs. In theory, that approach could substantially lower the cost of GPUs once the deal closes.
However, given all the regulatory approvals required, Huang said NVIDIA could not commit to a timeframe for closing the deal. Previously, Huang noted, it took NVIDIA a year to close its acquisition of Mellanox, a provider of networking and storage infrastructure.
In addition to driving gaming and CAD/CAM applications, GPUs from NVIDIA are widely employed to train AI models more efficiently than x86 processors from Intel or AMD can. Processors from Intel are more often used to run the inference engines for an AI model once it is deployed. With the acquisition of Arm, NVIDIA gains access to processor technology that is emerging as a rival architecture for deploying AI models on a wide range of server and mobile computing platforms. Currently, Arm is best known for providing processors for mobile computing devices such as smartphones, devices that will eventually be infused with a wide variety of machine and deep learning algorithms.
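To make that division of labor concrete, here is a minimal sketch, assuming PyTorch; the tiny network and synthetic data are hypothetical stand-ins. It trains on a GPU when one is available, then moves the model to a CPU, which today could be x86 or Arm, for inference:

```python
import torch
import torch.nn as nn

# Hypothetical two-layer network standing in for a real AI model.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Training typically targets a GPU when one is available ...
train_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(train_device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on synthetic data.
inputs = torch.randn(8, 16, device=train_device)
labels = torch.randint(0, 2, (8,), device=train_device)
optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()

# ... while inference is often served from a CPU,
# so the trained model is moved off the GPU before deployment.
model.to("cpu").eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 16)).argmax(dim=1)
```

The same inference code runs unchanged on an Arm-based server or device, which is part of what makes the architecture attractive for deploying AI models broadly.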
In the meantime, many organizations continue to struggle with implementing AI. It typically takes a team of data scientists six months or more to build an AI model. Once that model is built, it needs to be incorporated into an application as part of a DevOps workflow. Meanwhile, business assumptions may change, rendering the AI model irrelevant and requiring the data science team to build a new one. In addition, as new data sources become available, AI models need to be updated. Most organizations are only now defining the processes for building and deploying those models, otherwise known as machine learning operations (MLOps), using many of the principles first defined by DevOps teams.
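As a rough illustration of one of those MLOps principles, retraining and versioning a model whenever assumptions or data change, here is a minimal sketch; it assumes scikit-learn and joblib, and the file-based registry and function name are hypothetical:

```python
import json
import time
from pathlib import Path

import joblib
from sklearn.linear_model import LogisticRegression

REGISTRY = Path("model_registry")  # hypothetical local stand-in for a model registry


def retrain_and_register(X, y):
    """Retrain the model on the latest data and store it as a new version."""
    model = LogisticRegression().fit(X, y)
    version = time.strftime("%Y%m%d-%H%M%S")
    target = REGISTRY / version
    target.mkdir(parents=True, exist_ok=True)
    joblib.dump(model, target / "model.joblib")
    # Record metadata so downstream DevOps pipelines can detect the new version.
    (target / "metadata.json").write_text(
        json.dumps({"version": version, "trained_on_rows": len(y)})
    )
    return version
```

In production the registry would typically be a dedicated service rather than a directory, but the principle, that every retrained model becomes a new, traceable version, is the same.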
Once those MLOps processes are defined, of course, IT teams then need to meld MLOps and DevOps workflows to keep applications, which are updated far more frequently, synchronized with the AI models on which they increasingly depend.
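One common way to achieve that synchronization is to pin each application release to the specific model version it was validated against. Here is a minimal sketch, continuing the hypothetical registry layout above; the pin file and function name are likewise assumptions:

```python
import json
from pathlib import Path

import joblib

# Hypothetical mapping, maintained alongside the application's release config,
# pinning each application release to a validated model version,
# e.g. {"app-2.4.0": "20200913-101500"}.
PINS = json.loads(Path("model_pins.json").read_text())


def load_model_for(app_version: str):
    """Load the model version pinned to this application release."""
    model_version = PINS[app_version]
    return joblib.load(Path("model_registry") / model_version / "model.joblib")
```

Updating the pin file then becomes an ordinary code change that flows through the existing DevOps pipeline, which is one way the two workflows begin to meld.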
At this point, it’s clear every application will eventually incorporate some level of AI. However, the pace at which AI becomes pervasive is likely to remain slow for some time to come.