Edge Computing Group
- 1 Mission Statement
- 2 Group Resources
- 3 Meetings
- 3.1 Next meeting: Monday (December 13), 6am PST / 1400 UTC
- 3.2 Action item registry
- 3.3 Agenda
- 3.4 Upcoming Topics
- 3.5 Meeting Logs
- 3.5.1 Etherpads
- 3.5.2 The Industry IoT Consortium (IIC) Edge Computing Efforts presentation by Chuck Byers
- 3.5.3 Networking and IPv6 discussion with Ed Horley
- 3.5.4 Networking and DNS discussion with Cricket Liu and Andrew Wertkin
- 3.5.5 Smart Edge presentation and discussion with Neal Oliver, November 15, 2021
- 3.5.6 Digital Rebar presentation and discussion with Rob Hirschfeld, November 8, 2021
- 3.5.7 Virtual PTG Recordings, October, 2021
- 3.5.8 CHI@Edge - recording of the session with the Chameleon project, September 13, 2021
- 3.5.9 Virtual PTG Recordings, April, 2021
- 3.5.10 Open Geospatial Consortium presentation, November 16, 2020
- 3.5.11 Virtual PTG Recordings, October, 2020
- 3.5.12 Virtual PTG Recordings, June, 2020
- 3.5.13 Archive - Weekly Call Logs
- 4 Working Group Activities
- 5 Adjacent Projects and Communities
- 6 Challenges
- The OSF Edge Computing Group's objective is to define the infrastructure systems needed to support applications distributed over a broad geographic area, with potentially thousands of sites located as close as possible to discrete data sources, physical elements, or end users. The assumption is that network connectivity is over a WAN.
- The OSF Edge Computing Group will identify use cases, develop requirements, and produce viable architecture options and tests for evaluating new and existing solutions across different industries and global constituencies, enabling development activities in Open Infrastructure and other open source community projects that support edge use cases.
- Edge Computing Web Page - https://www.openstack.org/edge-computing/
- IRC Channel on Freenode - #edge-computing-group
- IRC Channel Logs: http://eavesdrop.openstack.org/irclogs/%23edge-computing-group/
- Mailing list - http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing
- Mondays at 6am PDT / 1300 UTC
- Calendar file is available here.
Next meeting: Monday (December 13), 6am PST / 1400 UTC
- Join at: https://zoom.us/j/5495195296?pwd=L3NycXhBRys3UEpOc2JzZjZuM25JUT09
- Meeting ID: 549 519 5296
- Passcode: openstack
- One tap mobile
- +12532158782,,5495195296# US (Tacoma)
- +13462487799,,5495195296# US (Houston)
- Dial by your location
- +1 253 215 8782 US (Tacoma)
- +1 346 248 7799 US (Houston)
- +1 669 900 6833 US (San Jose)
- +1 646 876 9923 US (New York)
- +1 301 715 8592 US (Washington DC)
- +1 312 626 6799 US (Chicago)
- Meeting ID: 549 519 5296
- Find your local number: https://zoom.us/u/ad8zlEN7JW
Action item registry
Please feel free to add your topic to the agenda, along with your name, so we know who to ping during the meeting.
- Action items
- See Action item registry
- Tangled up in Edge Series - 2022 Planning and Predictions
- December 13th at 6am Pacific / 1400 UTC
- The Industry IoT Consortium (IIC) Edge Computing Efforts by Chuck Byers - Associate CTO - Industry IoT Consortium (Formerly Industrial Internet Consortium)
- IIC has been working on edge computing for about five years. When the OpenFog Consortium joined IIC in 2019, our edge efforts were strengthened. We have worked on requirements, reference architectures, standards, best practices, testbeds, product catalogues, presentations and publications regarding edge computing. In this session, we will outline IIC's edge computing efforts, the work of our Edge Computing Task Group, and the many publications and industry best practices we have delivered. We will also describe some of our ongoing liaison work, and several whitepapers on edge computing we currently have in production.
- December 20th at 6am Pacific / 1400 UTC
- 2022-2023 Predictions
- December 27th
- Christmas and New Year's holiday break; the meeting is cancelled
- January 3rd at 6am Pacific / 1400 UTC
- 2021 retrospective and 2022 planning
- Tangled up in Edge
The Industry IoT Consortium (IIC) Edge Computing Efforts presentation by Chuck Byers
- Passcode: F=4.U.fA
Networking and IPv6 discussion with Ed Horley
- Passcode: QN=z7J*y
Networking and DNS discussion with Cricket Liu and Andrew Wertkin
- Passcode: ccX@PB7M
Smart Edge presentation and discussion with Neal Oliver, November 15, 2021
- Passcode: L.7V#RN8
Digital Rebar presentation and discussion with Rob Hirschfeld, November 8, 2021
- Passcode: 5ippfUp.
Virtual PTG Recordings, October, 2021
- Monday (October 18): https://zoom.us/rec/share/m2byTydgru8EoJEvha_XKuRz3-pSCKonnjx3aK6vwPGn8a-ZDqcI-WfENyrUDS7V.RehroG2ScY_lG1MK
- Passcode: d+Eb7d$8
- Tuesday (October 19): https://zoom.us/rec/share/Ar-mHC0De8MnFAkjMNDiGXWWWtGxxL-nxcuvEM6TIxaxAfLuG9DO9myRVw-yGRWX.SAiqAX4WcEai3iRY
- Passcode: 5%W.&V%*
- Wednesday (October 20): https://zoom.us/rec/share/AO3Fg8o8h_GVvon_v9KoKkrUwIJxjcElxBrPFyAohZBooK_HtG0P6HyBlZHXSvlz.S3Td71kBz64rYnjO
- Passcode: KEc%%p7g
- etherpad: https://etherpad.opendev.org/p/ecg-ptg-october-2021
CHI@Edge - recording of the session with the Chameleon project, September 13, 2021
- Passcode: HW^55Kc+
Virtual PTG Recordings, April, 2021
- Monday (April 19): https://zoom.us/rec/share/n9O0Se1JScNSLntUBvy0MtXRgdJGotz8GOP-zWvM3aLQvv6zIkjqYXM93HUzxEDx.D7gc0jc8SEYgqkXN
- Tuesday (April 20): https://zoom.us/rec/share/yMN8QANWzI7FJ_Hh-ys166_sECX2XgSgX-B_jKnnlYh4UjCAv9S32s-9U4mzjUGf.HZIErlBO6LOIMrJz
- Wednesday (April 21): https://zoom.us/rec/share/HVUp0n99jAJwlrD2zcoNxji_ZLWOBC8VsYsEP6h32Xrorxtqo20HsIdL9WL08Eoy.ese3KUH2Fq-HXm3q
Open Geospatial Consortium presentation, November 16, 2020
- Call recording
- Slides: https://portal.ogc.org/files/?artifact_id=95522
Virtual PTG Recordings, October, 2020
- Monday (October 26): https://zoom.us/rec/share/Y9GFNd2gxzeGvXTPby7XFBhbNX-uRLxzFsbl3SmPZLXpdvLRF8uzHEg6eFcukcPp.RTU9bg7r5jDIcYGT
- Tuesday (October 27): https://zoom.us/rec/share/seuDC-u95KzdBw0-mPPz7LV20ruiDdNdLYoF3QMwdjcjuAKRWMcggGClkTmvf8U.tKj_HUSkCSVoOO08
- Wednesday (October 28): https://zoom.us/rec/share/9UrZ9JxS9jNR06DGwiV1wnc4aPX5EGYhI_xVCTvglqrj9-i8QajNzJD8kWeU9N0W.h59c74-AQOngXcxG
- PTG notes: https://etherpad.opendev.org/p/ecg-vptg-october-2020
Virtual PTG Recordings, June, 2020
- Password: 1H!2?7%u
- Password: 3y*99q.6
- Password: 8H?A7OC2
Archive - Weekly Call Logs
Working Group Activities
- Upcoming events to attend and prepare for
- Interesting sessions from events
- Liaison: Ildiko Vancsa
- Meeting logs
Minimal Reference Architectures
Work items for testing
- Detailed design of the minimal reference architectures
- Configuration of the minimal reference architectures
- Draft test plan: https://etherpad.openstack.org/p/ecg-test-plan
- Lab requirements - http://lists.openstack.org/pipermail/edge-computing/2019-June/000597.html
- etherpad: https://etherpad.openstack.org/p/osf-edge-hacking-days
- Held every Friday; please add your name to the etherpad if you're available and interested
- Distributed Cloud (Incubation Project)
- Resource Synchronization and Quota Management Framework
- Storyboard Story: https://storyboard.openstack.org/#!/story/2002842
- Updated Gerrit Code Reviews:
Adjacent Projects and Communities
- Life-cycle Management. A virtual-machine/container/bare-metal manager responsible for the machine/container lifecycle (configuration, scheduling, deployment, suspend/resume, and shutdown). (Current Projects: TK)
- Image Management. An image manager in charge of template files (a.k.a. virtual-machine/container images). (Current Projects: TK)
- Network Management. A network manager in charge of providing connectivity to the infrastructure: virtual networks and external access for users. (Current Projects: TK)
- Storage Management. A storage manager, providing storage services to edge applications. (Current Projects: TK)
- Administrative. Administrative tools providing user interfaces to operate and use the dispersed infrastructure. (Current Projects: TK)
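The five management functions above can be sketched as minimal interfaces. This is a hypothetical illustration of the taxonomy only; the class and method names are invented here and do not correspond to any specific project's API:

```python
from abc import ABC, abstractmethod

# Illustrative interfaces mirroring the five management functions;
# all names are hypothetical, not taken from any real project.

class LifecycleManager(ABC):
    """Configure, schedule, deploy, suspend/resume, and shut down workloads."""
    @abstractmethod
    def deploy(self, image_ref: str, site: str) -> str: ...
    @abstractmethod
    def shutdown(self, instance_id: str) -> None: ...

class ImageManager(ABC):
    """Store and distribute VM/container image templates."""
    @abstractmethod
    def publish(self, name: str, data: bytes) -> str: ...

class NetworkManager(ABC):
    """Provide virtual networks and external access for users."""
    @abstractmethod
    def create_network(self, site: str, cidr: str) -> str: ...

class StorageManager(ABC):
    """Provide storage services to edge applications."""
    @abstractmethod
    def create_volume(self, site: str, size_gb: int) -> str: ...

class AdminTools(ABC):
    """User-facing tooling to operate and use the dispersed infrastructure."""
    @abstractmethod
    def list_sites(self) -> list[str]: ...
```

In an edge deployment, each interface may be backed by a different project, and a single logical manager may span many geographically distributed sites.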
- Storage latency. Addressing storage latency over WAN connections.
- Reinforced security at the edge. Monitoring the physical and application integrity of each site, with the ability to autonomously enable corrective actions when necessary.
- Resource utilization monitoring. Monitor resource utilization across all nodes simultaneously.
- Orchestration tools. Manage and coordinate many edge sites and workloads, potentially leading toward a peering control plane or "self-organizing edge."
- Federated orchestration of edge platforms (or cloud-of-clouds). Must be explored and introduced into the IaaS core services.
- Automated edge commission/decommission operations. Includes initial software deployment and upgrades of the resource management system’s components.
- Automated data and workload relocations. Load balancing across geographically distributed hardware.
- Synchronization of abstract state propagation. Needed at the "core" of the infrastructure to cope with discontinuous network links.
- Network partitioning with limited connectivity. New ways to deal with network partitioning issues due to limited connectivity, coping with both short and long disconnections.
- Manage application latency requirements. Defining advanced placement constraints to cope with the latency requirements of application components.
- Application provisioning and scheduling. Needed to satisfy placement requirements (initial placement).
- Data and workload relocations. According to internal/external events (mobility use-cases, failures, performance considerations, and so forth).
- Location awareness integration. Not all edge deployments will require the same application at the same moment; location and demand awareness are a likely need.
- Dynamic rebalancing of resources from remote sites. Discrete hardware with limited resources and limited ability to expand at the remote site needs to be taken into consideration when designing both the overall architecture at the macro level and the administrative tools. The concept of being able to grab remote resources on demand from other sites, either neighbors over a mesh network or from core elements in a hierarchical network, means that fluctuations in local demand can be met without inefficiency in hardware deployments.