Heat Cooks Twitter DC | AI Will Kill All Humans | Patreon Layoffs, CSAM Claim

Welcome to The Long View—where we peruse the news of the week and strip it to the essentials. Let’s work out what really matters.

This week: Twitter is in a “non-redundant state” thanks to a hot summer, AI is likely to eliminate us, and Patreon fires 80 staff amid nasty allegations (TW: Child safety).

1. Twitter Teetering on Edge as Sacramento Sizzles

First up this week: Sacramento’s extreme heat kills a “key” Twitter data center. An internal company decree has suspended continuous deployment (CD) and frozen all nonessential DevOps changes.

Analysis: And nothing of value was lost

Joking aside, unlikely events can still happen. This is why geographic redundancy is so important—as is testing to ensure you can recover from a failed DC or two.
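That failover logic fits in a few lines. Here’s a minimal sketch of priority-based regional failover; the region names and the health map are hypothetical, standing in for whatever your monitoring actually reports:

```python
# Minimal failover sketch: pick the highest-priority healthy region.
# Region names and health data are invented for illustration.

REGION_PRIORITY = ["us-west-sacramento", "us-east-atlanta", "us-central-portland"]

def pick_active_region(health: dict) -> str:
    """Return the highest-priority healthy region, or raise on total outage."""
    for region in REGION_PRIORITY:
        if health.get(region, False):
            return region
    raise RuntimeError("no healthy regions: total outage")

# Simulate the heat event: the primary DC drops out of the health map.
health = {
    "us-west-sacramento": False,  # cooling failure
    "us-east-atlanta": True,
    "us-central-portland": True,
}
print(pick_active_region(health))  # traffic shifts to the next region in line
```

The point of the DR-testing advice is to exercise exactly this path on purpose, before the weather does it for you.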

Donie O’Sullivan, Brian Fung and Sean Lyngaas got the memo: Extreme California heat knocks key Twitter data center offline

Prohibit non-critical updates
A company executive warned in an internal memo … that another outage elsewhere could result in the service going dark. … As a result of the outage in Sacramento, Twitter is in a “non-redundant state.”

The memo goes on to prohibit non-critical updates to Twitter’s product until the company can fully restore its Sacramento data center services: “All production changes, including deployments and releases to mobile platforms, are blocked.”

The restrictions highlight the apparent fragility of some of Twitter’s most fundamental systems, a problem Peiter “Mudge” Zatko, Twitter’s former head of security who turned whistleblower, had raised: … “Even a temporary but overlapping outage of a small number of datacenters would likely result in the service going offline for weeks, months, or permanently.”


Heat, in a Sacramento summer? What did they expect? MachineShedFred said:

At some point the probability of an event gets low enough that it’s not worth the extra cost to harden against it, especially if you have distributed systems in other geographical locations that can take the load for a bit. … Competent organizations have geographically-diverse locations … disaster recovery and business continuity plans. [And they] test their DR and BC plans.

I am not accusing Twitter of being competent, but in this scenario they have shown themselves to be not-incompetent. [But] there is a reason or three why Facebook, Apple, Amazon, Twitter, et al. have been building data centers in Oregon instead of California.


Trouble_007 is shaken and stirred: [You’re fired—Ed.]

When will the Antarctic data centers open? … Given the Antarctic is fairly geo-politically stable, I can see all the benefits of a combined data-centre/research-station site.


Elon to the rescue! idji explains:

Most Starlinks are … placed in orbits inclined at 53 degrees to the Equator. This covers all of the world between the Arctic and Antarctic Circles. To be able to provide coverage to the Poles … they launch some … in polar orbits at 97.7 degrees. … There are [also] some Starlinks on 70 degree orbits.

The 53 degree Starlinks travel over populated countries and talk with ground stations in the supported countries. The polar satellites use lasers to communicate with 53 degree satellites that have laser and ground station connectivity.


2. ASI Will Kill All Humans

Artificial superintelligence will kill us all. That’s the conclusion of a research paper by three storied AI researchers from the University of Oxford and the Australian National University.

Analysis: It’s “likely” we’re going to die

What if a super AI was motivated to succeed? And what if it needed more energy to succeed? And what if that meant diverting energy from humans and preventing us from hitting the Emergency Stop button? There are a lot of what-ifs here, but such is the nature of academic discourse.
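The core worry is reward tampering: an agent that maximizes a reward signal it can also interfere with. A toy sketch makes the point; every action and reward value here is invented for illustration and is not from the paper:

```python
# Toy reward-tampering illustration: a pure reward-maximizer that can
# intervene in how its reward is provided will prefer the intervention.
# Actions and reward values are hypothetical.

def observed_reward(action: str) -> float:
    if action == "do_the_task":
        return 1.0    # the reward the designers intended
    if action == "seize_reward_channel":
        return 100.0  # intervening in the provision of the reward
    return 0.0

actions = ["do_nothing", "do_the_task", "seize_reward_channel"]
best = max(actions, key=observed_reward)
print(best)  # a greedy maximizer picks the intervention, not the task
```

Trivial, yes. The paper’s argument is that the same incentive survives scaling up, which is where the energy-diverting, stop-button-disabling scenarios come from.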

Edward Ongweso Jr: AI Will Eliminate Humanity

Focusing on existential risk
Artificial intelligence could pose an existential risk to humanity [because of] how reward systems might be artificially constructed. … Researchers from the University of Oxford and [Australian National University] have now concluded that it’s “likely” … a superintelligent AI could break bad and take out humanity.

Since AI in the future could take on any number of forms and implement different designs, the paper imagines scenarios … where an advanced program could intervene to get its reward without achieving its goal. For example, an AI may want to “eliminate potential threats” and “use all available energy” to secure control over its reward.

There are a host of assumptions that have to be made for this anti-social vision to make sense. [But] algorithms have already transformed racist policing into “predictive policing” that justifies surveillance and brutality reserved for racial minorities as necessary. … Focusing on existential risk … asks us to think carefully about how these systems are designed and the negative effects they have.


One of the authors is Michael Cohen:

AIs intervening in the provision of their rewards would have consequences that are very bad. … Our conclusion is much stronger than that of any previous publication—an existential catastrophe is not just possible, but likely. … Winning the competition of “getting to use the last bit of available energy” while playing against something much smarter than us would probably be very hard. Losing would be fatal.

An ASI acting over the long term also cares about future rewards. … It can increase [them] by, e.g., using all available energy to redirect asteroids, block cosmic rays, stop any human weapons from being deployed against it, etc. … For us, the type of nature of those perceptions that we consider relevant to working out our goals are very complex and very varied [and] unclear, so it’s hard to see how we could encode a similar process in an agent.


But surely we’d see this coming? fuzzyfuzzyfungus finks not:

A scenario where one takes on additional functions gradually would both attract less attention. … If, say, a corporation gradually sourced more of its sociopathic self-interest from an overgrown ERP system and less from its managers [and] just had people gradually accepting more decisions from ‘the system’ as the quality of its advice grows and former decision makers quietly slacking off as they … just rubber stamp automatically generated plans and then break for golf, there would never really be a “today I pressed the ‘unshackle the overmind for massive profits!’ button” moment.

The outside world wouldn’t have any particular reason to notice. … Even people on the inside would probably … not really think about the change: The bot would still be optimizing along the same lines management was always intended to optimize along.


3. Patreon CSAM Allegation

Patreon, the notorious membership monetization platform that’s still doing business in Russia, has fired even more staff. Not content with canceling its internal security team—apparently in favor of outsourcing it—the firm is losing another 80 employees.

Analysis: And then the third shoe dropped

Allegations have resurfaced that Patreon ignores reports of child sexual abuse material (CSAM) being monetized via its platform. Concerns raised by an ex-employee and corroborated by people who tried very hard to report such abusive, vile filth are this week exercising many.

Amanda Silberling: Patreon lays off 17% of staff

Ongoing tidal wave
The creator subscription company Patreon is the latest company to be affected by a long line of tech layoffs. … This affects the Go-to-Market, Operations, Finance and People teams. Patreon will also close its Berlin office [and] the Dublin office. … The nine engineers working in Dublin will be offered relocation packages.

Just last week, Patreon also let go of five members of its security team. … CEO Jack Conte wrote in a letter to staff … that those layoffs “stemmed from a different set of reasons.”

When [I] spoke to Patreon chief product officer Julian Gutman in December, he said that the company planned to double in size by the end of 2022 — but evidently, those plans have shifted. … It could be a bad sign, though, that creator economy companies are not immune to this ongoing tidal wave of tech industry layoffs. Layoffs have also affected Lightricks, StreamElements … Jellysmack … Snap and ByteDance, TikTok’s parent company.


And that’s not all. Here’s Janet Douglas:

A former trust and safety specialist at Patreon took to … Glassdoor to allege that Patreon had demonstrated negligence with regards to child safeguarding. [They] claimed platform management had been actively encouraging safety staff to overlook pedophilic content: … “We are being told specifically by management and executives NOT to take down content that is illegal or was reported as sexual in nature involving minors. … When others try to inform management that there’s an amalgamation of accounts that are selling lewd photographs of what appear to be children, all concerns are dismissed.”

Parenting lifestyle and child safeguarding commentator Sarah Adams, known on [TikTok] as mom.uncharted, made a video … discussing her experience attempting to inform Patreon about potentially illegal content. Adams had first been alerted to an Instagram account featuring sexualized photos of what she says appeared to be a young girl. … The account also had a linked Patreon with over 2,000 donors.

Patreon claims the statement made on Glassdoor was “unequivocally false” and labels it “disinformation” and a “conspiracy.” [But] it does not address its extreme delay in action on the account reported by Sarah Adams [that] it initially defended and left active.


The Moral of the Story:
Misery acquaints a man with strange bedfellows


You have been reading The Long View by Richi Jennings. You can contact him at @RiCHi or tlv@richi.uk.

Image: Mingrui He (via Unsplash; leveled and cropped)

Richi Jennings

Richi Jennings is a foolish independent industry analyst, editor, and content strategist. A former developer and marketer, he’s also written or edited for Computerworld, Microsoft, Cisco, Micro Focus, HashiCorp, Ferris Research, Osterman Research, Orthogonal Thinking, Native Trust, Elgan Media, Petri, Cyren, Agari, Webroot, HP, HPE, NetApp on Forbes and CIO.com. Bizarrely, his ridiculous work has even won awards from the American Society of Business Publication Editors, ABM/Jesse H. Neal, and B2B Magazine.
