
Edge Computing Group/Edge Reference Architectures

Revision as of 11:35, 16 October 2018 by Ildiko (talk | contribs) (User Stories)


Define a reference architecture for edge and far edge deployments including OpenStack services and other open source components as building blocks.


"The most mature view of edge computing is that it is offering application developers and service providers cloud computing capabilities, as well as an IT service environment at the edge of a network." - Cloud Edge Computing: Beyond the Data Center by the OSF Edge Computing Group

"We define Edge computing as an infrastructure deployment focused on reducing latency between an application and its consumer by increasing geographical proximity to the consumer." - Denver PTG (2018) definition

Overall Edge Architecture.png

Tiers of computing sites

The table below captures the discussions at the PTG and maps them to the definitions created earlier in collaboration with the OPNFV Edge Cloud Project, as described in their whitepaper.

OpenStack Denver PTG (2018)        OPNFV Edge Cloud Project

Regional Datacenter                Large Edge
  • A large centralized facility located within 100ms of consumers.
  • Typically one deployer will have fewer than 10 of these.

Edge Site                          Medium Edge
  • A smaller site located within 15-20ms of consumers.
  • One deployer could have hundreds of these.

Far Edge Site/Cloudlet             Small Edge
  • A smaller site located within 10ms of consumers.
  • One deployer could have thousands of these.

Fog computing                      (no OPNFV equivalent listed)
  • Devices physically adjacent to the consumer (typically within the same building), within 1-2ms of consumers.
  • One deployer could have tens of thousands of these.
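The tiers above are defined purely by latency budget and expected site count. As an illustration only, the latency thresholds from the table can be sketched as a small classifier; the function name and the "Out of tier" label are assumptions for this sketch, not part of any OpenStack API:

```python
def classify_site(latency_ms: float) -> str:
    """Return the Denver PTG tier name for a given round-trip
    latency (in milliseconds) between a site and its consumers.
    Thresholds are taken from the tier table above."""
    if latency_ms <= 2:
        return "Fog computing"            # within 1-2ms, same building
    if latency_ms < 10:
        return "Far Edge Site/Cloudlet"   # less than 10ms
    if latency_ms < 20:
        return "Edge Site"                # within 15-20ms
    if latency_ms <= 100:
        return "Regional Datacenter"      # within 100ms
    return "Out of tier"                  # beyond the table's scope

print(classify_site(5))    # Far Edge Site/Cloudlet
print(classify_site(80))   # Regional Datacenter
```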

User Stories

There is no one-size-fits-all solution for infrastructure; a deployer must select the design pattern that best addresses their needs.

The following patterns have been developed to address specific user stories in edge compute architecture. They assume the deployer has tens of regional datacenters, 50+ edge sites, and hundreds or thousands of far edge cloudlets.

  • As a deployer of OpenStack I want to minimize the number of control planes I need to manage across a large geographical region. -- Rather centralize control, with the downside of losing functionality at the edge site in case of a connection loss? (Gergely Cs)
  • As a user of OpenStack I want to manage all of my instances in a region (from regional DC to far edge cloudlets) via a single API endpoint.
  • As a user of OpenStack I expect instance autoscale continues to function in an edge site if connectivity is lost to the main datacenter.
    • An optional requirement related to this one is the ability for a normal (non-service) user to log in to an edge site for operational and configuration purposes if connectivity is lost to the main datacenter. -- The use case here is perhaps a non-telecom one, e.g. a Walmart scenario, where edge sites (i.e. stores) have lost connectivity to the central Walmart site but still need to be able to manage the local applications running on their local cloud. (Greg W)
  • As a user of OpenStack, if an edge site loses external connectivity, I expect to still be able to manage my deployment within an edge site either via local edge users or via alternate remote access paths.
  • As a deployer of OpenStack I want disk images to be pulled to a cluster on demand, without needing to sync every disk image everywhere.
  • As a user of OpenStack I want persistent storage available within my far edge location, so that instance storage latency is minimized.
  • As a deployer of OpenStack I want to be able to upgrade my edge cloud infrastructure with service continuity. -- This may not be in scope for the MVP.
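The image-distribution story above (pulling disk images to a cluster on demand rather than syncing every image everywhere) can be approximated today with Glance's local image cache at each edge site. A minimal sketch of the relevant glance-api.conf settings, assuming the caching middleware is deployed at the edge; the cache directory path is an assumption:

```ini
# glance-api.conf (illustrative fragment, not a complete configuration)
[DEFAULT]
# Local directory where images fetched from the central store are cached,
# so repeated boots at this edge site do not re-fetch over the WAN.
image_cache_dir = /var/lib/glance/image-cache/

[paste_deploy]
# Enable the image-caching middleware alongside keystone auth.
flavor = keystone+cachemanagement
```

With this in place, an image is transferred to the edge site only the first time it is requested there, which addresses the on-demand pull without requiring a full sync of every image to every site.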

Deployment Scenarios

Distributed Control Plane Scenario

Regional Datacenter Distributed Control.png

Centralized Control Plane Scenario

Regional Datacenter Centralized.png

Other edge architecture options