Edge Computing Group

Latest revision as of 08:50, 23 February 2021

Mission Statement

  • The OSF Edge Computing Group's objective is to define infrastructure systems needed to support applications distributed over a broad geographic area, with potentially thousands of sites, located as close as possible to discrete data sources, physical elements or end users. The assumption is that network connectivity is over a WAN.
  • The OSF Edge Computing Group will identify use cases, develop requirements, and produce viable architecture options and tests for evaluating new and existing solutions, across different industries and global constituencies, to enable development activities for Open Infrastructure and other Open Source community projects to support edge use cases.

Group Resources

Meetings

Weekly calls:

  • Mondays at 6am PDT / 1400 UTC
    • Calendar file is available at https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/edge/OSF-Edge-Computing-Group-Weekly-Calls2.ics

Next meeting: Monday (March 01), 6am PST / 1400 UTC

Call details

  • Zoom link: https://zoom.us/j/879678938
  • Meetpad link: https://meetpad.opendev.org/osf-edge-computing-group
  • Dialing in from a phone:
    • Dial a number based on your current location (for higher quality). US: +1 669 900 6833 or +1 646 876 9923
    • Meeting ID: 879 678 938
    • International numbers available: https://zoom.us/u/ed95sU7aQ

Action item registry

Agenda

Please feel free to add your topic to the agenda. Please add your name as well so we know whom to ping during the meeting.

  • Action items
    • See Action item registry
  • "Stand up"
    • Quick sync about ongoing activities
  • Events
    • Upcoming events to attend and prepare for
      • https://etherpad.opendev.org/p/ecg-edge-events-2021
      • Adrien's group to present on 1st of March
  • Inria's presentation about their edge activities

Abstract: Cloud computing applications are often represented as a collection of loosely coupled services encapsulated in sandboxing technologies such as containers. This modularization helps programmers cope with the complexity of distributed infrastructures. Not only does it make the application easier to understand, it also enables DevOps to build generic features on top of it. The externalization of application life-cycle management (deployment, monitoring, scaling, etc.) from the business logic is a perfect example.

With the arrival of the edge computing paradigm, new challenges arise for applications: How can they benefit from geo-distribution? How should they deal with inherent constraints such as network latency and partitioning?

The general approach is to implement edge-native applications by entangling geo-distribution aspects with the business logic. Besides complicating the code, this contradicts the idea of externalizing concerns.

We suggest relying, once again, on the modularity of applications to externalize geo-distribution concerns and avoid meddling with the business logic. In this approach, an instance of the application is deployed at each edge location, and a generic mechanism enables DevOps to recompose, on demand, the different services of these application instances in order to give the illusion of a single application instance. Our proposal requires some extensions to be complete; however, we present its relevance on a real use case: OpenStack for the edge. Thanks to this approach, DevOps can make multiple autonomous OpenStack instances collaborate to manage a geo-distributed infrastructure.

  • 2021 activities
    • https://etherpad.opendev.org/p/edge-computing-group-2021-planning
  • Recurring items
    • Events
      • Interesting sessions from events
        • https://etherpad.opendev.org/p/edge-event-recordings
    • Work items for testing
      • Detailed design of the minimal reference architectures
      • Configuration of the minimal reference architectures
      • Draft test plan: https://etherpad.openstack.org/p/ecg-test-plan
      • Lab requirements: http://lists.openstack.org/pipermail/edge-computing/2019-June/000597.html
    • Hacking days
      • etherpad: https://etherpad.openstack.org/p/osf-edge-hacking-days
      • Every Friday; please add your availability to the etherpad if you're available and interested
  • AoB
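The recomposition idea from the abstract can be illustrated with a minimal sketch. Every name here (`Site`, `Composer`, `rewire`, the service names) is hypothetical and chosen for illustration only; the point is that rerouting lives in a generic layer, so no service's business logic changes when sites start collaborating.

```python
class Site:
    """One edge location running its own instance of every service."""
    def __init__(self, name):
        self.name = name
        # Each service is modeled as a plain callable owned by this site.
        self.services = {
            "image_store": lambda req: f"{self.name} serving image '{req}'",
        }

class Composer:
    """Generic mechanism that recomposes services across sites on demand,
    without touching the business logic of any service."""
    def __init__(self, sites):
        self.sites = {s.name: s for s in sites}
        # routes[(site, service)] -> name of the site whose instance is used.
        self.routes = {}

    def rewire(self, at_site, service, to_site):
        # DevOps decision: requests for `service` made at `at_site`
        # are served by `to_site`'s instance from now on.
        self.routes[(at_site, service)] = to_site

    def call(self, at_site, service, request):
        # Default to the local instance unless a route says otherwise.
        target = self.routes.get((at_site, service), at_site)
        return self.sites[target].services[service](request)

sites = [Site("paris-edge"), Site("nantes-edge")]
composer = Composer(sites)

# Before recomposition, each site answers with its own instance.
local = composer.call("paris-edge", "image_store", "ubuntu")

# Collaboration: paris-edge now transparently uses nantes-edge's store,
# giving the illusion of a single application instance.
composer.rewire("paris-edge", "image_store", to_site="nantes-edge")
remote = composer.call("paris-edge", "image_store", "ubuntu")

print(local)   # paris-edge serving image 'ubuntu'
print(remote)  # nantes-edge serving image 'ubuntu'
```

In a real deployment the routing table would have to survive the WAN partitions and latencies discussed in the Challenges section below; this sketch only captures the externalization of the geo-distribution concern.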

Meeting Logs

Weekly Call Logs

https://wiki.openstack.org/wiki/Edge_Computing_Group/Weekly_Call_Logs

Open Geospatial Consortium presentation, November 16, 2020

  • Call recording
    • https://zoom.us/rec/share/JiNJGL84oGBh9KRcSRgyHoNV8WxhrBbW0I_dAtE-ietsCm0O260LeTpMWbNEFkMM.n815RJsQATJsAiBF
    • Passcode: y5sF37@%
  • Slides: https://portal.ogc.org/files/?artifact_id=95522

Virtual PTG Recordings, October, 2020

Password: ptg2020!

  • Monday (October 26): https://zoom.us/rec/share/Y9GFNd2gxzeGvXTPby7XFBhbNX-uRLxzFsbl3SmPZLXpdvLRF8uzHEg6eFcukcPp.RTU9bg7r5jDIcYGT
  • Tuesday (October 27): https://zoom.us/rec/share/seuDC-u95KzdBw0-mPPz7LV20ruiDdNdLYoF3QMwdjcjuAKRWMcggGClkTmvf8U.tKj_HUSkCSVoOO08
  • Wednesday (October 28): https://zoom.us/rec/share/9UrZ9JxS9jNR06DGwiV1wnc4aPX5EGYhI_xVCTvglqrj9-i8QajNzJD8kWeU9N0W.h59c74-AQOngXcxG
  • PTG notes: https://etherpad.opendev.org/p/ecg-vptg-october-2020

Virtual PTG Recordings, June, 2020

Working Group Activities

Use cases

Minimal Reference Architectures

https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures

https://wiki.openstack.org/wiki/Edge_Computing_Group/Architecture_Implementations_With_Kubernetes

https://wiki.openstack.org/wiki/Edge_Computing_Group/Hybrid_Architecture_Implementations

OpenStack Activities

https://wiki.openstack.org/wiki/Edge_Computing_Group/OpenStack_Edge_Activities

StarlingX Activities

Adjacent Projects and Communities

https://wiki.openstack.org/wiki/Edge_Computing_Group/Adjacent_Edge_Projects

Challenges

  • Life-cycle Management. A virtual-machine/container/bare-metal manager in charge of managing machine/container lifecycle (configuration, scheduling, deployment, suspend/resume, and shutdown). (Current Projects: TK)
  • Image Management. An image manager in charge of template files (a.k.a. virtual-machine/container images). (Current Projects: TK)
  • Network Management. A network manager in charge of providing connectivity to the infrastructure: virtual networks and external access for users. (Current Projects: TK)
  • Storage Management. A storage manager, providing storage services to edge applications. (Current Projects: TK)
  • Administrative. Administrative tools, providing user interfaces to operate and use the dispersed infrastructure. (Current Projects: TK)
  • Storage latency. Addressing storage latency over WAN connections.
  • Reinforced security at the edge. Monitoring the physical and application integrity of each site, with the ability to autonomously enable corrective actions when necessary.
  • Resource utilization monitoring. Monitor resource utilization across all nodes simultaneously.
  • Orchestration tools. Manage and coordinate many edge sites and workloads, potentially leading toward a peering control plane or “self-organizing edge.”
  • Federation of edge platform orchestration (or cloud-of-clouds). Must be explored and introduced into the IaaS core services.
  • Automated edge commission/decommission operations. Includes initial software deployment and upgrades of the resource management system’s components.
  • Automated data and workload relocations. Load balancing across geographically distributed hardware.
  • Synchronization of abstract state propagation. Needed at the “core” of the infrastructure to cope with discontinuous network links.
  • Network partitioning with limited connectivity. New ways to deal with network partitioning issues due to limited connectivity, coping with short disconnections and long disconnections alike.
  • Manage application latency requirements. The definition of advanced placement constraints in order to cope with latency requirements of application components.
  • Application provisioning and scheduling. In order to satisfy placement requirements (initial placement).
  • Data and workload relocations. According to internal/external events (mobility use-cases, failures, performance considerations, and so forth).
  • Integrated location awareness. Not all edge deployments will require the same application at the same moment. Location and demand awareness are a likely need.
  • Dynamic rebalancing of resources from remote sites. Discrete hardware with limited resources and limited ability to expand at the remote site needs to be taken into consideration when designing both the overall architecture at the macro level and the administrative tools. The concept of being able to grab remote resources on demand from other sites, either neighbors over a mesh network or from core elements in a hierarchical network, means that fluctuations in local demand can be met without inefficiency in hardware deployments.