Edge Computing: Saving the Cloud from Ephemeral Data

By: Don Dingee on January 30, 2019

Edge computing is rapidly gaining momentum. Last week’s Linux Foundation news on the launch of LF Edge is one more sign of accelerating growth at the edge. What’s driving this sudden intense interest? It’s more than just the sheer number of edge devices or mounting concerns around interoperability and security. Pushing compute resources closer to the edge can save the cloud and networks from waves of ephemeral data.

The concept of edge computing certainly isn’t new. In years BC (“before connectivity”), there were plenty of edge computing boxes. These featured single-board computers designed to specifications such as VMEbus, STDbus, PC/104, CompactPCI, AdvancedTCA and many others. Processing happened at the edge as data arrived. Ethernet connections made administration and results-sharing easier, but very few architectures relied on shipping all the data into the network.

Of course, there was one big exception: a business model pulling all the data into the network. Telecom providers aggregated traffic from the edge into their core networks. As data traffic joined voice traffic, core networks grew with monstrous bandwidth and increased services. Core networks interconnected. Data traffic rose even faster, pushing the network into a cloud architecture. In years AD (“after distribution”), everything from the edge is now connecting to the cloud.

Much of the increase in data traffic came from the shift to video content. Users expect higher resolution video programming with the same lag-free performance. Streaming networks and quality of service (QoS) implementations have made great strides in content delivery. 5G infrastructure rollouts in progress will keep mobile a strong downstream contender.

What about upstream applications such as the IoT? Nearly the same terminology applies there. Devices gather data at the edge and transmit it to the cloud through a gateway. There are two key differences with most IoT applications, however. Real-time data may be lost if the streaming connection into the cloud is interrupted. Also, the assumption that individual devices need only trivial bandwidth breaks down at scale if all the data is transmitted. At some point, perhaps ten thousand, a hundred thousand, or a million or more sensors, an aggregate bandwidth problem develops, as the rough arithmetic below illustrates.
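As a back-of-the-envelope sketch in Python, assume a modest per-sensor data rate; the fleet sizes and the 1 kB/s figure are illustrative assumptions, not numbers from the article.

# Back-of-the-envelope aggregate bandwidth for an IoT fleet that ships every
# reading upstream. The per-sensor rate and fleet sizes are illustrative
# assumptions, not figures from the article or from Cisco.

PER_SENSOR_BYTES_PER_SEC = 1_000  # assume roughly 1 kB/s of raw telemetry per sensor

def aggregate_mbps(sensor_count: int) -> float:
    """Total upstream bandwidth in megabits per second if nothing is filtered at the edge."""
    return sensor_count * PER_SENSOR_BYTES_PER_SEC * 8 / 1_000_000

for fleet in (10_000, 100_000, 1_000_000):
    print(f"{fleet:,} sensors -> {aggregate_mbps(fleet):,.0f} Mbps sustained upstream")
    # prints roughly 80 Mbps, 800 Mbps and 8,000 Mbps (8 Gbps) respectively

Even at these modest rates, a million-sensor deployment implies multi-gigabit sustained backhaul before any useful work has been done on the data.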

Just as in the BC age, it's time to rethink shipping all that IoT data upstream and instead take action at the edge. Cisco Systems has studied data trends extensively, producing its white paper, “Cisco Global Cloud Index: Forecast and Methodology, 2016-2021.” The company points to the IoT ushering in the “yottabyte era” very soon. Cisco estimates that 220ZB of data was generated in 2016 and projects that by 2021, there will be nearly 850ZB from people, machines and things combined.

Now for the interesting part: Cisco also estimates that as much as 90 percent of that data is ephemeral, short-lived data not worth saving. Even if only 10 percent of the data is useful, it will swamp projected data center traffic by an estimated factor of four in 2021, a gap illustrated in Figure 24 of the Cisco white paper.
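The arithmetic behind the factor-of-four claim is straightforward. A quick sketch, taking Cisco's figures as quoted above; the roughly 21ZB data center traffic number is inferred from the factor-of-four statement rather than quoted in the article.

# Rough arithmetic behind the "factor of four" claim, using the figures
# quoted in this article. The ~21ZB data center traffic value is inferred
# from the factor-of-four comparison, not stated directly here.

total_data_2021_zb = 850            # data created by people, machines and things
useful_fraction = 0.10              # Cisco: ~90% is ephemeral, so ~10% worth keeping
useful_zb = total_data_2021_zb * useful_fraction   # 85 ZB of useful data
inferred_dc_traffic_zb = useful_zb / 4             # ~21 ZB of projected data center traffic

print(f"{useful_zb:.0f} ZB useful vs. ~{inferred_dc_traffic_zb:.0f} ZB data center traffic: about a 4x overhang")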

Edge computing, or its close cousin fog computing, allows two things to happen. First, it can pre-process ingested data, ironing out protocol and format differences so that everything arriving at the cloud is immediately ready for algorithms. Any savings in the cloud from not shuffling data before computation are valuable, freeing up processing and network bandwidth. Second, parts of an algorithm can easily be farmed out toward the edge, perhaps decimating or filtering data before shipping it upstream for heavy analysis, as the sketch below illustrates.
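A minimal Python sketch of what that edge pre-processing stage might look like; the message fields, vendor formats and the 10:1 decimation factor are hypothetical, not drawn from any particular edge framework.

import json
from statistics import mean

# Hypothetical edge pre-processing: normalize readings arriving in different
# vendor formats into one schema, then decimate by averaging fixed-size windows
# before anything is shipped upstream. Field names and the 10:1 decimation
# factor are illustrative assumptions.

DECIMATION_WINDOW = 10  # keep one averaged sample for every 10 raw readings

def normalize(raw: dict) -> dict:
    """Map vendor-specific field names onto a single upstream schema."""
    return {
        "device_id": raw.get("id") or raw.get("deviceId"),
        "timestamp": raw.get("ts") or raw.get("time"),
        "value": float(raw.get("val", raw.get("reading", 0.0))),
    }

def decimate(samples: list) -> list:
    """Collapse each full window of readings into one averaged record."""
    out = []
    for i in range(0, len(samples) - DECIMATION_WINDOW + 1, DECIMATION_WINDOW):
        window = samples[i:i + DECIMATION_WINDOW]
        out.append({
            "device_id": window[0]["device_id"],
            "timestamp": window[-1]["timestamp"],
            "value": mean(s["value"] for s in window),
        })
    return out

def to_upstream(raw_messages: list) -> str:
    """Edge pipeline: parse, normalize, decimate, and serialize for the cloud."""
    normalized = [normalize(json.loads(m)) for m in raw_messages]
    return json.dumps(decimate(normalized))

The point of the design is that the cloud side sees one clean, already-thinned schema instead of every vendor's raw firehose.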

A bigger use for edge computing may be using algorithms to avoid shipping ephemeral data upstream altogether. Using generative models at the edge could make a huge difference in identifying useful data. Most of us have heard that no human can sit and watch hours and hours of surveillance video from multiple sources looking for one moment of a problem developing. Consider another example with an EKG monitor attached to a smartphone: While the patient is at home, at rest and unstressed, the EKG is in a perfectly normal range as learned by the application. Doctors are alerted when readings slip out of normal range, with raw data transmitted for observation if required. A similar example is waiting for a motor to fail while monitoring its operation.
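A minimal sketch of that "ship only the exceptions" pattern, along the lines of the EKG example; the window sizes and the three-sigma threshold are illustrative assumptions rather than anything prescribed here.

from collections import deque
from statistics import mean, stdev

# Minimal edge-side anomaly filter: learn a normal band from a baseline window
# and forward a reading (plus recent raw context) only when it drifts outside
# that band. Window sizes and the 3-sigma threshold are illustrative assumptions.

class EdgeAnomalyFilter:
    def __init__(self, baseline_size: int = 500, context_size: int = 50, sigmas: float = 3.0):
        self.baseline = deque(maxlen=baseline_size)   # readings used to learn "normal"
        self.context = deque(maxlen=context_size)     # recent raw data to attach to alerts
        self.sigmas = sigmas

    def observe(self, value: float):
        """Return (value, raw_context) when a reading looks abnormal, else None."""
        self.context.append(value)
        if len(self.baseline) < self.baseline.maxlen:
            self.baseline.append(value)               # still learning the normal range
            return None
        mu, sd = mean(self.baseline), stdev(self.baseline)
        if abs(value - mu) > self.sigmas * sd:
            return value, list(self.context)          # ship only this upstream
        self.baseline.append(value)                   # normal readings keep refining the baseline
        return None

Everything the filter judges normal stays at the edge; only the exception and a small window of raw context ever crosses the network.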

Both the LF Edge initiative and work happening in the OpenFog Consortium are creating specifications for edge computing. These initiatives are aimed at interoperable, secure architectures where cooperative processing mines data. If items of interest can be extracted efficiently from the mix of useful and ephemeral data at the edge, loads on the cloud and networks will be lighter. Edge computing is also likely to detect exceptions more quickly, bringing people in sooner to look at what is going on. The result will be more scalable IoT applications with better chances of delivering value from all that data.

— Don Dingee

Filed Under: Blogs, DevOps in the Cloud, Features Tagged With: cloud, edge computing, fog computing, Internet of Things, network bandwidth, open source
