Edge Computing Group

Revision as of 08:37, 13 October 2020

Mission Statement

  • The OSF Edge Computing Group's objective is to define the infrastructure systems needed to support applications distributed over a broad geographic area, with potentially thousands of sites, located as close as possible to discrete data sources, physical elements or end users. The assumption is that network connectivity is over a WAN.
  • The OSF Edge Computing Group will identify use cases, develop requirements, and produce viable architecture options and tests for evaluating new and existing solutions, across different industries and global constituencies, to enable development activities for Open Infrastructure and other open source community projects to support edge use cases.

Group Resources

Meetings

Weekly calls in two time slots:

  • Mondays at 6am PDT / 1300 UTC
  • China regional WG calls every Thursday at 0700 UTC
    • Calendar file is available here: https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/edge/OSF-Edge-Computing-Group-Weekly-Calls2.ics

Next meeting: Monday (November 03), 6am PST / 1300 UTC

Call details

  • Zoom link: https://zoom.us/j/879678938
  • Meetpad link: https://meetpad.opendev.org/osf-edge-computing-group
  • Dialing in from phone:
    • Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
    • Meeting ID: 879 678 938
    • International numbers available: https://zoom.us/u/ed95sU7aQ

Action item registry

  • Ildiko to check on this Glance review: https://review.openstack.org/#/c/619638/ - asked Greg about the plans

Agenda

Please feel free to add your topic to the agenda. Please add your name as well so we know on the meeting who to ping.

  • Action items
    • See Action item registry
  • PTG retrospective
    • Agenda - https://etherpad.opendev.org/p/ecg-vptg-october-2020
  • Recurring items
    • Interesting sessions from events
      • https://etherpad.opendev.org/p/edge-event-recordings
    • Work items for testing
      • Detailed design of the minimal reference architectures
      • Configuration of the minimal reference architectures
      • Draft test plan: https://etherpad.openstack.org/p/ecg-test-plan
      • Lab requirements - http://lists.openstack.org/pipermail/edge-computing/2019-June/000597.html
    • Hacking days
      • Etherpad: https://etherpad.openstack.org/p/osf-edge-hacking-days
      • Every Friday; please add your availability to the etherpad if you're available and interested
  • AoB

Meeting Logs

Weekly Call Logs

https://wiki.openstack.org/wiki/Edge_Computing_Group/Weekly_Call_Logs

Virtual PTG Recordings, June, 2020

Working Group Activities

Use cases

Minimal Reference Architectures

https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures

https://wiki.openstack.org/wiki/Edge_Computing_Group/Architecture_Implementations_With_Kubernetes

https://wiki.openstack.org/wiki/Edge_Computing_Group/Hybrid_Architecture_Implementations

OpenStack Activities

https://wiki.openstack.org/wiki/Edge_Computing_Group/OpenStack_Edge_Activities

StarlingX Activities

Adjacent Projects and Communities

https://wiki.openstack.org/wiki/Edge_Computing_Group/Adjacent_Edge_Projects

Challenges

  • Life-cycle Management. A virtual-machine/container/bare-metal manager in charge of managing machine/container lifecycle (configuration, scheduling, deployment, suspend/resume, and shutdown). (Current Projects: TK)
  • Image Management. An image manager in charge of template files (a.k.a. virtual-machine/container images). (Current Projects: TK)
  • Network Management. A network manager in charge of providing connectivity to the infrastructure: virtual networks and external access for users. (Current Projects: TK)
  • Storage Management. A storage manager, providing storage services to edge applications. (Current Projects: TK)
  • Administrative. Administrative tools, providing user interfaces to operate and use the dispersed infrastructure. (Current Projects: TK)
  • Storage latency. Addressing storage latency over WAN connections.
  • Reinforced security at the edge. Monitoring the physical and application integrity of each site, with the ability to autonomously enable corrective actions when necessary.
  • Resource utilization monitoring. Monitor resource utilization across all nodes simultaneously.
  • Orchestration tools. Manage and coordinate many edge sites and workloads, potentially leading toward a peering control plane or “self-organizing edge.”
  • Federated orchestration of edge platforms (or cloud-of-clouds). This must be explored and introduced into the IaaS core services.
  • Automated edge commission/decommission operations. Includes initial software deployment and upgrades of the resource management system’s components.
  • Automated data and workload relocations. Load balancing across geographically distributed hardware.
  • Synchronization of abstract state propagation. Needed at the “core” of the infrastructure to cope with discontinuous network links.
  • Network partitioning with limited connectivity. New ways to deal with partitioning caused by limited connectivity, coping with short and long disconnections alike.
  • Manage application latency requirements. Define advanced placement constraints to cope with the latency requirements of application components.
  • Application provisioning and scheduling. Place applications so that initial placement satisfies their placement requirements.
  • Data and workload relocations. Relocate data and workloads in response to internal/external events (mobility use cases, failures, performance considerations, and so forth).
  • Integrated location awareness. Not all edge deployments will require the same application at the same moment; location and demand awareness are a likely need.
  • Dynamic rebalancing of resources from remote sites. Discrete hardware with limited resources and limited ability to expand at the remote site needs to be taken into consideration when designing both the overall architecture at the macro level and the administrative tools. The concept of being able to grab remote resources on demand from other sites, either neighbors over a mesh network or from core elements in a hierarchical network, means that fluctuations in local demand can be met without inefficiency in hardware deployments.
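Several of the challenges above (advanced placement constraints, initial placement, relocation when a site cannot meet demand) can be illustrated with a small sketch. This is not code from any OSF or OpenStack project; the site names, RTT figures, capacities, and the `place` function are hypothetical, chosen only to show a minimal greedy latency-aware placement under a capacity constraint:

```python
# Hypothetical sketch of latency-aware initial placement: pick the edge
# site that satisfies an application's latency bound and still has spare
# capacity. All names and numbers are illustrative, not real deployments.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class EdgeSite:
    name: str
    rtt_ms: float     # measured RTT from the application's users to this site
    free_vcpus: int   # remaining schedulable capacity at this site


def place(app_vcpus: int, max_rtt_ms: float,
          sites: List[EdgeSite]) -> Optional[EdgeSite]:
    """Return the lowest-latency feasible site, or None if no site fits."""
    candidates = [s for s in sites
                  if s.rtt_ms <= max_rtt_ms and s.free_vcpus >= app_vcpus]
    if not candidates:
        # No feasible site: this is where relocation or dynamic
        # rebalancing of resources from neighboring sites would kick in.
        return None
    return min(candidates, key=lambda s: s.rtt_ms)


sites = [
    EdgeSite("core-dc", rtt_ms=80.0, free_vcpus=512),
    EdgeSite("metro-pop", rtt_ms=20.0, free_vcpus=16),
    EdgeSite("far-edge", rtt_ms=5.0, free_vcpus=2),
]

# A 4-vCPU app with a 25 ms latency bound lands on "metro-pop":
# "far-edge" is closer but lacks capacity, and "core-dc" violates
# the latency constraint.
best = place(app_vcpus=4, max_rtt_ms=25.0, sites=sites)
print(best.name)  # metro-pop
```

A real scheduler would additionally track demand fluctuations over time and re-run this decision on internal/external events (failures, mobility, load changes), which is the relocation and rebalancing problem described in the bullets above.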