Welcome to The Long View—where we peruse the news of the week and strip it to the essentials. Let’s work out what really matters.
This week: Google’s FLoC proposal is dead, Meta/Facebook is buying RSC—a huge AI supercomputer, and Arm “will IPO” instead of selling to Nvidia.
1. Forget ‘Federated Learning of Cohorts’—Long Live ‘Topics’
First up this week: Google bows to the inevitable. Third-party cookies will soon go away, because people are fed up with being tracked. But Google’s FLoC proposal wasn’t the answer.
Analysis: FLoC is a flop
Google doesn’t want to invade your privacy. But Google does want to sell well-targeted ads. The key difference between FLoC and Topics is that FLoC buckets were automatically generated—and hence opaque, risking overlap with “protected categories”—e.g., race or gender. But the topic classifications in the Topics API are curated and have user-visible, descriptive names.
Frederic Lardinois: Google kills off FLoC, replaces it with Topics
Google’s controversial project for replacing cookies for interest-based advertising … is dead. In its place, Google today announced a new proposal. … Your browser will learn about your interests as you move around the web.
…
When you hit upon a site that supports the Topics API for ad purposes, the browser will share three topics you are interested in. … The site can then share this with its advertising partners to decide which ads to show you. … Users will be able to review and remove topics from their lists [or] turn off the entire Topics API.
…
It remains to be seen whether other browser vendors will be interested in adding the Topics API. [But] they all quickly turned a cold shoulder to FLoC. … The plan is to start trialing the Topics API at the end of this quarter.
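For the mechanically minded, the proposal exposes this to sites through a JavaScript call. Here's a minimal TypeScript sketch of how an ad script might read a visitor's topics, assuming the draft document.browsingTopics() surface described in the announcement; the method name and result shape are still subject to change as the trial progresses.

```typescript
// Sketch only: the Topics API is a draft proposal, so the method name,
// result shape, and availability here are assumptions based on the announcement.
interface BrowsingTopic {
  topic: number;           // index into the curated, human-readable taxonomy
  taxonomyVersion: string;
  modelVersion: string;
  configVersion: string;
}

async function fetchAdTopics(): Promise<BrowsingTopic[]> {
  const doc = document as Document & {
    browsingTopics?: () => Promise<BrowsingTopic[]>;
  };
  if (!doc.browsingTopics) {
    return []; // browser doesn't implement the API, or the user switched it off
  }
  // The browser returns up to three topics (roughly one per recent week),
  // drawn only from the predetermined taxonomy, never free-form labels.
  return doc.browsingTopics();
}

fetchAdTopics().then((topics) =>
  console.log("Topics shared with this site:", topics.map((t) => t.topic))
);
```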
A minor tweak and rename? “Yes and no,” equivocates mananaysiempre:
Yes, it still amounts to building user-tracking functionality into user agents. No, it’s not the same approach.
…
FLoC was doing unsupervised clustering of users, whereas Topics allocates users into predetermined clusters. This makes it potentially more transparent and also seems designed to work around the “protected categories” objection.
But Google has a long way to go to convince garett_spencley:
I’m glad FLoC is being ditched, but as an end-user I’m still not asking for this Topics nonsense. It doesn’t solve any problem that I have. It solves problems that the advertising industry has.
…
I’ll choose to use browsers that don’t support this.
2. Machine Learning at Metaverse Scale is Expensive
Zuckerberg’s new toy is RSC: A $400 million Nvidia-based AI supercomputer. And that’s just Phase One.
Analysis: 3.8×10¹⁸ integer operations per second is … a lot
If you rely on Open Compute Project systems for your DevOps, know that it’s no longer the bee’s knees over at Hacker Way. Meta/Facebook is investing eye-watering sums into a custom configuration of off-the-shelf hardware.
Timothy Prickett Morgan: Meta Buys, Rather Than Builds And Opens, Its Massive AI Supercomputer
If you thought it took a lot of compute and storage to build Facebook’s social network, you ain’t seen nothing yet. … To build the Metaverse … is going to take an absolutely enormous amount of supercomputing power.
…
But what is a surprise, given Facebook’s more-than-decade-long focus on … opening up its hardware designs through the Open Compute Project, is that the … Research Super Computer [RSC] system is built from … commercially available … servers, storage, and switches. [It] probably comes down to … working a deal with some vendors who want the Facebook publicity and are willing to work the price and the parts availability.
…
[So Meta bought] 760 DGX A100 systems from Nvidia—which each … have a pair of AMD “Rome” 64-core Epyc 7742 processors … 1 TB of main memory … eight A100 GPU accelerators [and] eight 200 Gb/sec HDR InfiniBand controllers—all-flash storage from Pure Storage, and … caching servers from Penguin Computing. … At list price this machine would cost around $400 million … Phase one of the RSC machine is rated at 59 petaFLOPS … (FP64), and 118.6 petaFLOPS FP64 using the Tensor Core matrix engines. … With INT8 processing on the Tensor Cores, which is needed for AI inference, the phase one machine is rated at 3.79 exaOPS.
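Those headline figures are straightforward multiplication. A quick sanity check in TypeScript, assuming Nvidia's published per-A100 peaks (9.7 teraFLOPS FP64, 19.5 teraFLOPS FP64 on the Tensor Cores, 624 teraOPS dense INT8):

```typescript
// Back-of-the-envelope check on phase one of RSC, using Nvidia's
// published A100 peak figures (dense, no structured sparsity).
const dgxSystems = 760;
const gpusPerDgx = 8;
const gpus = dgxSystems * gpusPerDgx;                  // 6,080 A100s

const fp64PetaFlops       = (gpus * 9.7e12)  / 1e15;   // ~59 petaFLOPS
const fp64TensorPetaFlops = (gpus * 19.5e12) / 1e15;   // ~118.6 petaFLOPS
const int8ExaOps          = (gpus * 624e12)  / 1e18;   // ~3.79 exaOPS

console.log({ gpus, fp64PetaFlops, fp64TensorPetaFlops, int8ExaOps });
```

All three land on the numbers in the quote, which suggests the ratings are simply per-GPU peaks scaled across 6,080 accelerators.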
How much??? It doesn’t seem like news to kleiba:
I used to work at a university where my professor had been in automatic speech recognition for a long time, but basically gave up on that line of research about 10 years ago because he figured that universities simply cannot compete budget-wise with the big industry players. I suppose the same will be true for most ML-related areas of research sooner or later.
…
Already, a substantial amount of research innovation in [Natural Language Processing] and [Computer Vision] has been coming from big companies in recent years. Of course there is a discussion to be had about what that means for society at large.
But 3.79 exaOPS? That’s the way the wind is blowing, according to sungazer:
Give it 15 or so years—this level of computation will be on your desktop. If desktops still exist.
…
Or desks.
3. Arm Bid: No Legs
Speaking of Nvidia, as I’ve said before, the Arm merger really shouldn’t go through. And it’s looking less and less likely. Arm chips are an increasing fixture in the data center—especially where “performance per watt” matters.
Analysis: ARM M&A DOA—IPO PDQ
A veritable hornets’ nest of Bloomberg journalists have put their names to an anonymously sourced report. The story argues that the proposed Nvidia buyout of Arm from SoftBank Group is dead. Next stop: IPO.
Ian King, Giles Turner, Peter Elstrom, Dina Bass, David McLaughlin, Ruth David and Dinesh Nair: Nvidia Quietly Prepares to Abandon … Arm Bid
According to people familiar with the matter … Nvidia … doesn’t expect the transaction to close. … SoftBank, meanwhile, is stepping up preparations for an Arm initial public offering.
…
If Nvidia manages to get the deal over the line, it would be a massive coup. … But it will be an uphill fight. … The world’s biggest tech companies rely on Arm technology, and they fear they could lose unfettered access under Nvidia.
…
Within SoftBank, there are factions. [And] the ordeal has created divisions within Nvidia. … “We remain hopeful that the transaction will be approved,” a SoftBank spokesperson said in an emailed statement.
Post-Brexit Britain would no doubt like to keep Arm independent. Here’s simonh:
Arm has I think ⅔ of its employees in the UK—mainly Cambridge—and they’re expanding the campus there. The Semiconductor Physics Group at the University’s Cavendish Laboratory is at the global cutting edge, with 18 PhD projects currently underway.
…
Guess where scads of Arm’s top scientists are recruited from? Arm and the University are joined at the hip. That company’s not going anywhere.
More broadly, it’s good news, thinks hatchet:
Hopefully this means NVIDIA will focus on RISC-V instead. This will be good for everyone.
The Moral of the Story: Zeal should not outrun discretion.
You have been reading The Long View by Richi Jennings. You can contact him at @RiCHi or [email protected].
Image: Reed Geiger (via Unsplash)