At the Open Compute Project (OCP) Global Summit, Cisco announced it has developed an 800-gigabit switch that consumes significantly less power than the previous generation of its networking equipment.
Thomas Scheibe, vice president of product management for cloud networking across Cisco’s Nexus and ACI product lines, said the throughput provided by the latest 7-nanometer iteration of Cisco Silicon One ASIC processors is needed primarily by organizations processing massive amounts of data to train artificial intelligence (AI) models, whether in a local data center or in the cloud. The challenge, he noted, is that many of these organizations want to build AI models that process massive amounts of data while simultaneously reducing their IT infrastructure’s carbon footprint.
Cisco has been able to achieve that goal by continuing to invest in the proprietary ASIC processors at the core of its networking portfolio, which also serve to improve data operations (DataOps), said Scheibe. Many IT teams are also looking to replace legacy network routers and switches that consume more power, a move that can reduce energy costs by as much as 77%, he added. In terms of climate impact, Cisco claimed its 8111-32EH switch can now provide 25.6Tbps at 15% of the previous power requirement. Cisco projected that the switch will save about 10,000 kg of CO2e per year compared to a 12.8Tbps switch. That equates to the greenhouse gas emissions generated by an average gasoline-powered passenger vehicle driving 26,155 miles, according to Cisco.
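As a rough sanity check on that equivalence, the two figures imply an emissions factor for the average vehicle. This is a back-of-envelope sketch, not part of Cisco's announcement; the comparison to a commonly cited ~400 g/mile average is an outside assumption.

```python
# Back-of-envelope check of the claimed CO2e equivalence (illustrative only).
ANNUAL_SAVINGS_KG = 10_000   # kg CO2e saved per year (Cisco's projection)
EQUIVALENT_MILES = 26_155    # miles driven by an average gasoline car (Cisco's figure)

# Emissions factor implied by the two figures, in grams of CO2e per mile.
implied_g_per_mile = ANNUAL_SAVINGS_KG * 1000 / EQUIVALENT_MILES
print(f"Implied emissions factor: {implied_g_per_mile:.0f} g CO2e per mile")  # ~382
```

That works out to roughly 382 g CO2e per mile, which is broadly in line with commonly cited estimates for an average gasoline-powered passenger vehicle, so the two numbers are internally consistent.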
IT teams also have the option of deploying either Cisco’s own network operating system (NOS) or SONiC, the open source Software for Open Networking in the Cloud.
Cisco is also providing IT organizations with the option to configure ports on its latest routers and switches to run at 800, 400 or 100 Gbps, with an eye toward upgrading throughput in the future.
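The per-port speed options map directly onto aggregate throughput. As a sketch, assuming a 32-port chassis (the port count is inferred from the 8111-32EH model name, not stated above):

```python
# Aggregate throughput of a switch at each supported per-port speed.
# The 32-port count is an assumption inferred from the 8111-32EH model name.
PORTS = 32

for speed_gbps in (800, 400, 100):
    total_tbps = PORTS * speed_gbps / 1000
    print(f"{PORTS} ports x {speed_gbps}G = {total_tbps:.1f} Tbps")
```

At 800 Gbps per port, 32 ports yields 25.6 Tbps, which matches the throughput Cisco quotes for the 8111-32EH.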
In general, DataOps as an IT discipline is maturing as organizations realize they need best practices to optimize data flows across the enterprise. Cisco is giving IT teams the option of employing a standard Ethernet fabric or an alternative fabric that increases throughput by predicting which packets need to be delivered to a specific location based on their attributes and the behavior of previous network traffic. As the volume of data that needs to be accessed by low-latency applications continues to expand, DataOps will become a more critical IT discipline. Historically, storage administrators tended to be responsible for data management, but as these applications proliferate, a new class of DataOps engineers is emerging to optimize the flow of data across distributed computing environments.
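The core idea behind such a predictive fabric can be illustrated with a toy model: observe which destination packets with a given set of attributes historically go to, then predict the likeliest destination for new packets. This is a simplified, hypothetical sketch; it is not Cisco's actual algorithm, and the class and attribute names are invented for illustration.

```python
from collections import Counter, defaultdict

class DestinationPredictor:
    """Toy frequency-based predictor: maps packet attributes to the
    destination most often seen for those attributes in past traffic.
    (Hypothetical illustration, not Cisco's fabric implementation.)"""

    def __init__(self):
        # attribute tuple -> counts of destinations seen for it
        self._history = defaultdict(Counter)

    def observe(self, attrs, destination):
        """Record that a packet with these attributes went to this destination."""
        self._history[attrs][destination] += 1

    def predict(self, attrs):
        """Return the most frequently seen destination, or None if unseen."""
        counts = self._history.get(attrs)
        return counts.most_common(1)[0][0] if counts else None

predictor = DestinationPredictor()
predictor.observe(("tenant-a", "storage"), "leaf-3")
predictor.observe(("tenant-a", "storage"), "leaf-3")
predictor.observe(("tenant-a", "storage"), "leaf-7")
print(predictor.predict(("tenant-a", "storage")))  # leaf-3
```

A real fabric would make this decision in hardware at line rate; the point of the sketch is only that prior traffic behavior can steer delivery decisions for future packets.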
It’s not clear how much carbon dioxide emissions factor into IT decisions today, but some organizations have begun tracking them as part of an effort to lower their carbon footprint, and most cloud service providers have committed to becoming carbon neutral. Cisco is betting that achieving that goal will require IT infrastructure upgrades at a time when the amount of data that needs to be processed will only continue to increase exponentially.