DevOps, Drought and Climate | Meta❤️PTP

By: Richi Jennings on November 21, 2022 Leave a Comment

Welcome to The Long View—where we peruse the news of the week and strip it to the essentials. Let’s work out what really matters.

This week: Data centers cause climate change, and Meta is rolling out Precision Time Protocol.


1. Doin’ DevOps Warms the Planet

First up this week: Data centers are making drought conditions worse. So says a Virginia Tech prof.

Analysis: Cheap chillers waste water

We all know that energy-hungry infrastructure is a worry. But climate change is causing droughts, which are becoming a problem for data centers in water-stressed locations. One key is to use chiller designs that don’t evaporate water, but the irony is that those chillers use more energy.

Diana Olick: Microsoft, Meta and others face rising drought risk to their data centers

“In ‘severe’ drought”
Drought conditions are worsening in the U.S., and that is having an outsized impact on … data centers [which] generate massive amounts of heat. … Water is the cheapest and most common method used to cool [them].
…
In just one day, the average data center could use 300,000 gallons of water to cool itself — the same water consumption as 100,000 homes, according to researchers at Virginia Tech. … Realizing the water risk in New Mexico, Meta … ran a pilot program on its Los Lunas data center to reduce relative humidity from 20% to 13%, lowering water consumption. It has since implemented this in all of its [data] centers.
…
Just over half … of the nation is in drought conditions, and over 60% of the lower 48 states. … That is a 9% increase from just one month ago. Much of the west and Midwest is in ‘severe’ drought.

David Lumb: Internet Outages Could Spread as Temperatures Rise

“Water is projected to get scarcer”
2022 is expected to be the sixth-hottest year on record as average temperatures reached 1.57 degrees Celsius above the 20th century average. We’re on track to normalize that temperature gain every year. … And it could get worse.
…
As our world warms up, power outages and water shortages have ravaged many parts of the planet. Data centers may be among the first to feel the … pinch. They need lots of energy to keep their servers powered, air conditioning and often water to cool the servers. … As climate change threatens energy availability, Big Tech has engaged more sustainable strategies. These include shifting more of their energy reliance to renewables like solar and wind … recycling more water and tinkering with other cooling options.
…
As one-fifth of the data centers in the country get their water from moderately to highly stressed regions supplying water … US cities are already getting nervous. [And] water is projected to get scarcer. But droughts are hard to evade when you also need to be as close as possible to customers you’re serving.


Horse’s mouth? Virginia Tech Assistant Professor Landon Marston:

It takes a massive amount of water to produce the electricity needed, which means that data centers indirectly use a lot of water through their large electricity demand. … When locating new data centers … environmental considerations should be included in the discussion alongside infrastructure, regulatory, workforce, client proximity, and tax considerations.


How are these data centers using water, exactly? aaarrrgggh explaaaiiinnns:

Water use is from evaporation in cooling towers, plus blow-down needed to reduce suspended solids in condenser water as water evaporates. There are technologies available to reduce blow-down some … but it has an energy penalty.

You can also design systems so that you only use evaporation mode cooling when outside temperatures are over ~100F, and use a dry-cooling mode the rest of the time. Both changes increase electricity use to reduce water use.
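For a sense of scale, here's a back-of-envelope sketch of why evaporative cooling drinks so much water. It assumes (simplistically) that all heat is rejected by evaporation, ignoring sensible cooling and blow-down; the 30 MW facility size is an illustrative assumption, not any vendor's spec:

```python
# Back-of-envelope: water evaporated by a cooling tower rejecting a given heat load.
# Assumes all heat is rejected by evaporation (ignores sensible cooling and blow-down).
LATENT_HEAT_J_PER_KG = 2.26e6   # latent heat of vaporization of water, ~2.26 MJ/kg
KG_PER_GALLON = 3.785           # 1 US gallon of water ≈ 3.785 kg

def gallons_per_day(heat_load_mw: float) -> float:
    """US gallons of water evaporated per day to reject heat_load_mw megawatts."""
    kg_per_sec = (heat_load_mw * 1e6) / LATENT_HEAT_J_PER_KG
    return kg_per_sec * 86_400 / KG_PER_GALLON

# A ~30 MW facility lands in the same ballpark as the ~300,000 gal/day
# figure the Virginia Tech researchers quote for a large data center.
print(round(gallons_per_day(30)))
```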


Sounds like it’s about money. Ohhh, u/E_Snap:

Same thing that almond and avocado farmers have done in California. They’ve convinced the state that giving them as much free water as they can take is more important than flushing your toilet at home.


But BeepBoopBeep’s idea ignores latency:

I have no clue why they don’t build data centers in the Midwest—with the largest source of water on the planet—and just run the water through external free cooling when the winters cool down the water for free. As long as the water source is not polluted and the water is re-introduced into the original source, it’s fine.


2. Precision Time Protocol at Meta — Why?

Precision Time Protocol (PTP) is a telecoms thing, right? Why does Meta care about rolling it out to all its infrastructure? Surely good old NTP is accurate enough?

Analysis: Eventual consistency needs accurate time for perf

It turns out that Meta gets way better performance by shrinking the difference between nodes’ clocks: waiting for database consistency means padding the pause by the expected clock inaccuracy, so improving accuracy has a huge effect on perf.
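To see why, here's an illustrative sketch of a Spanner-style “commit wait” — not Meta's actual implementation, and the error bounds are assumptions for illustration. A timestamped write can't safely be exposed until every node's clock is certainly past the timestamp, so readers stall for up to roughly twice the clock-error bound:

```python
# Illustrative sketch (not Meta's code): a Spanner-style "commit wait".
# A timestamped write cannot be exposed until every node's clock is
# certainly past the timestamp, so readers stall for up to ~2*epsilon,
# where epsilon bounds the clock error. Tighter sync means shorter stalls.

def worst_case_stall_s(epsilon_s: float) -> float:
    """Upper bound on the consistency stall, given clock-error bound epsilon."""
    return 2 * epsilon_s

# Assumed, illustrative error bounds:
ntp_stall = worst_case_stall_s(10e-3)   # NTP: milliseconds of uncertainty
ptp_stall = worst_case_stall_s(50e-6)   # PTP: tens of microseconds, conservatively
print(f"NTP stall <= {ntp_stall*1e3:.1f} ms, PTP stall <= {ptp_stall*1e3:.3f} ms")
```

The ~100x end-to-end gain Meta measured (below) reflects the whole pipeline, not just this one stall.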

Sebastian Moss: Meta to deploy new network timing protocol

“Uses hardware timestamping and transparent clocks”
While Network Time Protocol (NTP) allows for precision within milliseconds, PTP allows for precision within nanoseconds. … Servers need to keep accurate and coordinated time.
…
PTP was actually first deployed in 2002. … A Stratum network computer holds the current time and sends a time reference to any other computer on a network that asks what time it is, via a network data packet. [But] latency impacted the speed at which systems could be informed of the time.
…
PTP uses hardware timestamping and transparent clocks to improve consistency and symmetry, respectively. … PTP is already pushed by the telecom industry as networks transition to 5G connectivity, as its added precision and accuracy is necessary for higher-bandwidth 5G.
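Under the hood, PTP (like NTP) derives clock offset and path delay from a four-timestamp exchange. A minimal sketch of that arithmetic, with toy numbers:

```python
# The four-timestamp exchange at the heart of PTP (and NTP):
#   t1 = master sends Sync,      t2 = slave receives it,
#   t3 = slave sends Delay_Req,  t4 = master receives it.
# Assuming a symmetric path, offset and one-way delay fall out directly.
# Hardware timestamping makes t1..t4 accurate; transparent clocks subtract
# switch residence time so the symmetry assumption actually holds.

def offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay
    return offset, delay

# Toy numbers: slave clock runs 5 units fast, true one-way delay is 2 units.
print(offset_and_delay(t1=100, t2=107, t3=110, t4=107))  # → (5.0, 2.0)
```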


Meta’s Oleg Obleukhov and Ahmad Byagowi rent the curtain asunder:

[PTP] allows us to synchronize the systems that drive our products and services down to nanosecond precision. PTP’s predecessor, Network Time Protocol (NTP), provided us with millisecond precision, but as we scale … we need to ensure that our servers are keeping time as accurately and precisely as possible … for everyone, across time zones and around the world.
…
Imagine a situation in which a client writes data and immediately tries to read it. In large distributed systems, chances are high that the write and the read will land on different back-end nodes. … Adding precise and reliable timestamps on a back end and replicas allows us to simply wait until the replica catches up. … One could argue that we don’t really need PTP for that. NTP will do just fine. … But experiments we ran comparing our state-of-the-art NTP implementation and an early version of PTP showed a roughly 100x performance difference.
…
There are several additional use cases, including event tracing, cache invalidation, privacy violation detection improvements, latency compensation … and simultaneous execution in AI, many of which will greatly reduce hardware capacity requirements. This will keep us busy for years ahead.
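The write-then-read scenario Meta describes can be sketched in a few lines — a hypothetical API, not Meta's: the client remembers the primary's write timestamp, and a replica serves the read only once it has applied everything up to that timestamp.

```python
# Sketch of the read-after-write scenario quoted above (hypothetical API):
# the client keeps the primary's write timestamp; a replica answers the
# read only once its applied timestamp has caught up.

class Replica:
    def __init__(self):
        self.applied_ts = 0
        self.data = {}

    def apply(self, ts, key, value):
        self.data[key] = value
        self.applied_ts = ts

    def read(self, key, min_ts):
        if self.applied_ts < min_ts:
            return None            # caller waits and retries until caught up
        return self.data.get(key)

replica = Replica()
write_ts = 42                      # timestamp returned by the primary
assert replica.read("k", min_ts=write_ts) is None   # replica is behind: wait
replica.apply(42, "k", "v")        # replication catches up
assert replica.read("k", min_ts=write_ts) == "v"    # now safe to serve
```

How long the caller waits depends directly on how well the clocks agree — which is the whole point of the PTP rollout.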


In other news, the world has voted to stop adding leap seconds. Which is giving hoytech flashbacks:

In 2015 I was working at a “fintech” company and a leap second was announced. … When the previous leap second was applied, a bunch of our Linux servers had kernel panics for some reason, so needless to say everyone was really concerned about a leap second happening during trading.
…
I spent a month in the lab, simulating the leap second by fast forwarding clocks for all our different applications, testing different NTP implementations. … I had heaps of meetings with our partners trying to figure out what their plans were … and test what would happen if their clocks went backwards. I had to learn about how to install the leap seconds file into a bunch of software I never even knew existed, write various recovery scripts, and at one point was knee-deep in ntpd and Solaris kernel code.
…
The day before it was scheduled, the whole trading world agreed to halt the markets for 15 minutes before/after the leap second, so all my work was for nothing. I’m not sure what the moral is here.
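The moral, perhaps, is why “leap smearing” became the common dodge: instead of inserting 23:59:60 (or stepping backwards), spread the extra second linearly over a long window — Google smears over the 24 hours centered on the leap second. A minimal sketch of the arithmetic:

```python
# Sketch of "leap smearing": instead of stepping the clock by a whole
# second, absorb it linearly over a window (e.g. Google's 24-hour smear).

def smear_offset(seconds_into_window: float, window_s: float = 86_400) -> float:
    """Fraction of the leap second already absorbed, as a clock offset in seconds."""
    frac = min(max(seconds_into_window / window_s, 0.0), 1.0)
    return frac  # 0.0 before the window opens, 1.0 once it closes

print(smear_offset(0))        # start of window: no offset yet
print(smear_offset(43_200))   # halfway through: half the second absorbed
print(smear_offset(86_400))   # window over: the full second absorbed
```

Smearing trades a day of slightly-wrong clocks for never seeing 23:59:60 — exactly the pathological input that paniced those kernels.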


The Moral of the Story:
Life is not a problem to be solved, but a reality to be experienced

—Søren Kierkegaard

You have been reading The Long View by Richi Jennings. You can contact him at @RiCHi or [email protected].

Image: Jovan Vasiljević (via Unsplash; leveled and cropped)
