
Mission Statement

  • The OSF Edge Computing Group’s objective is to define the infrastructure systems needed to support applications distributed over a broad geographic area, with potentially thousands of sites, located as close as possible to discrete data sources, physical elements, or end users. The assumption is that network connectivity is over a WAN.
  • The OSF Edge Computing Group will identify use cases, develop requirements, and produce viable architecture options and tests for evaluating new and existing solutions across different industries and global constituencies, in order to enable development activities in Open Infrastructure and other Open Source community projects that support edge use cases.

Group Resources

Meetings

  • Mondays at 6am PST / 1400 UTC
    • Calendar file is available here.

Next meeting: Monday (March 01), 6am PST / 1400 UTC

Call details

Action item registry

Agenda

Please feel free to add your topic to the agenda, and add your name as well so we know who to ping during the meeting.

  • Action items
    • See Action item registry
  • "Stand up"
  • Edge events in 2021
    • https://etherpad.opendev.org/p/ecg-edge-events-2021
    • Adrien's group to present on 1st of March
  • Inria's presentation about their edge activities (abstract below)
  • 2021 activities
    • https://etherpad.opendev.org/p/edge-computing-group-2021-planning

Abstract: Cloud computing applications are often represented as a collection of loosely coupled services encapsulated in sandboxing technologies such as containers. That modularization helps programmers cope with the complexity of distributed infrastructures. Not only does it make the application easier to understand, it also enables DevOps to build generic features on top of it. The externalization of application life-cycle management (deployment, monitoring, scaling, etc.) from the business logic is a perfect example.

With the arrival of the edge computing paradigm, new challenges arise for applications: How to benefit from geo-distribution? How to deal with inherent constraints such as network latencies and partitioning?

The general approach consists of implementing edge-native applications by entangling geo-distribution aspects in the business logic. Besides complicating the code, this contradicts the idea of externalizing concerns.

We suggest relying, once again, on the modularity of applications to externalize geo-distribution concerns and avoid meddling in the business logic. In this approach, an instance of the application is deployed at each edge location, and a generic mechanism enables DevOps to recompose, on demand, the different services of these application instances in order to give the illusion of a single application instance. Our proposal requires some extensions to be complete; however, we demonstrate its relevance on a real use case: OpenStack for the edge. Thanks to this approach, DevOps can make multiple autonomous OpenStack instances collaborate to manage a geo-distributed infrastructure.
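
A minimal Python sketch of the recomposition idea from this abstract is given below: one autonomous application instance runs at each edge site, and a generic routing layer lets an operator scope each service call to a chosen site on demand. The SiteInstance and Registry classes, the site names, and the toy "image"/"compute" services are hypothetical illustrations, not part of Inria's prototype or of any OpenStack API.

  # Hypothetical sketch: per-site application instances recomposed on demand.
  from dataclasses import dataclass, field
  from typing import Callable, Dict

  @dataclass
  class SiteInstance:
      """One autonomous instance of the application, deployed at an edge site."""
      name: str
      services: Dict[str, Callable[..., str]] = field(default_factory=dict)

  class Registry:
      """Knows every site instance and routes service calls between them."""
      def __init__(self) -> None:
          self.sites: Dict[str, SiteInstance] = {}

      def register(self, site: SiteInstance) -> None:
          self.sites[site.name] = site

      def call(self, scope: Dict[str, str], service: str, *args) -> str:
          # The scope decides which site's copy of a service handles each call,
          # so independent instances are recomposed without touching their code.
          return self.sites[scope[service]].services[service](*args)

  # Two autonomous instances of the same application at two edge sites.
  paris = SiteInstance("paris", {"image": lambda n: f"image '{n}' served from paris"})
  nantes = SiteInstance("nantes", {"compute": lambda img: f"server booted in nantes with {img}"})

  registry = Registry()
  registry.register(paris)
  registry.register(nantes)

  # Recompose on demand: boot a server in nantes using an image held in paris.
  scope = {"image": "paris", "compute": "nantes"}
  image = registry.call(scope, "image", "ubuntu-20.04")
  print(registry.call(scope, "compute", image))

Because the scope, not the services, carries the geo-distribution decision, the business logic of each instance stays untouched, which is exactly the externalization the abstract argues for.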

Meeting Logs

Weekly Call Logs

https://wiki.openstack.org/wiki/Edge_Computing_Group/Weekly_Call_Logs

Open Geospatial Consortium presentation, November 16, 2020

Virtual PTG Recordings, October 2020

Password: ptg2020!

Virtual PTG Recordings, June 2020

Working Group Activities

Use cases

Minimal Reference Architectures

https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures

https://wiki.openstack.org/wiki/Edge_Computing_Group/Architecture_Implementations_With_Kubernetes

https://wiki.openstack.org/wiki/Edge_Computing_Group/Hybrid_Architecture_Implementations

OpenStack Activities

https://wiki.openstack.org/wiki/Edge_Computing_Group/OpenStack_Edge_Activities

StarlingX Activities

Adjacent Projects and Communities

https://wiki.openstack.org/wiki/Edge_Computing_Group/Adjacent_Edge_Projects

Challenges

  • Life-cycle Management. A virtual-machine/container/bare-metal manager in charge of managing machine/container lifecycle (configuration, scheduling, deployment, suspend/resume, and shutdown). (Current Projects: TK)
  • Image Management. An image manager in charge of template files (a.k.a. virtual-machine/container images). (Current Projects: TK)
  • Network Management. A network manager in charge of providing connectivity to the infrastructure: virtual networks and external access for users. (Current Projects: TK)
  • Storage Management. A storage manager, providing storage services to edge applications. (Current Projects: TK)
  • Administrative. Administrative tools, providing user interfaces to operate and use the dispersed infrastructure. (Current Projects: TK)
  • Storage latency. Addressing storage latency over WAN connections.
  • Reinforced security at the edge. Monitoring the physical and application integrity of each site, with the ability to autonomously enable corrective actions when necessary.
  • Resource utilization monitoring. Monitor resource utilization across all nodes simultaneously.
  • Orchestration tools. Manage and coordinate many edge sites and workloads, potentially leading toward a peering control plane or “self-organizing edge.”
  • Federation of edge platform orchestration (or cloud-of-clouds). This must be explored and introduced into the IaaS core services.
  • Automated edge commission/decommission operations. Includes initial software deployment and upgrades of the resource management system’s components.
  • Automated data and workload relocations. Load balancing across geographically distributed hardware.
  • Synchronization of abstract state propagation. Needed at the “core” of the infrastructure to cope with discontinuous network links.
  • Network partitioning with limited connectivity. New ways to deal with network partitioning issues due to limited connectivity, coping with short disconnections and long disconnections alike.
  • Manage application latency requirements. The definition of advanced placement constraints in order to cope with the latency requirements of application components (see the sketch after this list).
  • Application provisioning and scheduling. In order to satisfy placement requirements (initial placement).
  • Data and workload relocations. According to internal/external events (mobility use cases, failures, performance considerations, and so forth).
  • Integrated location awareness. Not all edge deployments will require the same application at the same moment. Location and demand awareness are a likely need.
  • Dynamic rebalancing of resources from remote sites. Discrete hardware with limited resources and limited ability to expand at the remote site needs to be taken into consideration when designing both the overall architecture at the macro level and the administrative tools. The concept of being able to grab remote resources on demand from other sites, either neighbors over a mesh network or from core elements in a hierarchical network, means that fluctuations in local demand can be met without inefficiency in hardware deployments.
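
To make the placement-constraint items above concrete, here is a hypothetical Python sketch that filters edge sites by a workload's declared maximum latency and required capacity, then prefers the site with the most spare capacity. The Site and Workload structures, their fields, and the numbers are illustrative assumptions, not an existing scheduler's API.

  # Hypothetical latency-aware placement: filter by constraints, then rank.
  from dataclasses import dataclass
  from typing import List, Optional

  @dataclass
  class Site:
      name: str
      latency_ms: float   # measured latency from the target users to this site
      free_cpus: int      # remaining capacity at the site

  @dataclass
  class Workload:
      name: str
      max_latency_ms: float   # placement constraint declared by the application
      cpus: int               # capacity the workload needs

  def place(workload: Workload, sites: List[Site]) -> Optional[Site]:
      """Return the best site meeting the latency and capacity constraints."""
      candidates = [
          s for s in sites
          if s.latency_ms <= workload.max_latency_ms and s.free_cpus >= workload.cpus
      ]
      # Prefer the site with the most spare capacity; None if nothing qualifies.
      return max(candidates, key=lambda s: s.free_cpus, default=None)

  sites = [
      Site("core-dc", latency_ms=80.0, free_cpus=512),
      Site("edge-a", latency_ms=8.0, free_cpus=16),
      Site("edge-b", latency_ms=12.0, free_cpus=4),
  ]
  print(place(Workload("video-analytics", max_latency_ms=20.0, cpus=8), sites))
  # -> Site(name='edge-a', latency_ms=8.0, free_cpus=16)

A real scheduler would also have to revisit such a decision when measured latencies or capacities change, which ties into the relocation and rebalancing challenges listed above.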